Jan 26 11:50:37 np0005596060 kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 26 11:50:37 np0005596060 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 26 11:50:37 np0005596060 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 26 11:50:37 np0005596060 kernel: BIOS-provided physical RAM map:
Jan 26 11:50:37 np0005596060 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 26 11:50:37 np0005596060 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 26 11:50:37 np0005596060 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 26 11:50:37 np0005596060 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 26 11:50:37 np0005596060 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 26 11:50:37 np0005596060 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 26 11:50:37 np0005596060 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 26 11:50:37 np0005596060 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 26 11:50:37 np0005596060 kernel: NX (Execute Disable) protection: active
Jan 26 11:50:37 np0005596060 kernel: APIC: Static calls initialized
Jan 26 11:50:37 np0005596060 kernel: SMBIOS 2.8 present.
Jan 26 11:50:37 np0005596060 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 26 11:50:37 np0005596060 kernel: Hypervisor detected: KVM
Jan 26 11:50:37 np0005596060 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 26 11:50:37 np0005596060 kernel: kvm-clock: using sched offset of 3232828731 cycles
Jan 26 11:50:37 np0005596060 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 26 11:50:37 np0005596060 kernel: tsc: Detected 2800.000 MHz processor
Jan 26 11:50:37 np0005596060 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 26 11:50:37 np0005596060 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 26 11:50:37 np0005596060 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 26 11:50:37 np0005596060 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 26 11:50:37 np0005596060 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 26 11:50:37 np0005596060 kernel: Using GB pages for direct mapping
Jan 26 11:50:37 np0005596060 kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 26 11:50:37 np0005596060 kernel: ACPI: Early table checksum verification disabled
Jan 26 11:50:37 np0005596060 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 26 11:50:37 np0005596060 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 11:50:37 np0005596060 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 11:50:37 np0005596060 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 11:50:37 np0005596060 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 26 11:50:37 np0005596060 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 11:50:37 np0005596060 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 11:50:37 np0005596060 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 26 11:50:37 np0005596060 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 26 11:50:37 np0005596060 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 26 11:50:37 np0005596060 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 26 11:50:37 np0005596060 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 26 11:50:37 np0005596060 kernel: No NUMA configuration found
Jan 26 11:50:37 np0005596060 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 26 11:50:37 np0005596060 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 26 11:50:37 np0005596060 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 26 11:50:37 np0005596060 kernel: Zone ranges:
Jan 26 11:50:37 np0005596060 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 26 11:50:37 np0005596060 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 26 11:50:37 np0005596060 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 26 11:50:37 np0005596060 kernel:  Device   empty
Jan 26 11:50:37 np0005596060 kernel: Movable zone start for each node
Jan 26 11:50:37 np0005596060 kernel: Early memory node ranges
Jan 26 11:50:37 np0005596060 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 26 11:50:37 np0005596060 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 26 11:50:37 np0005596060 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 26 11:50:37 np0005596060 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 26 11:50:37 np0005596060 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 26 11:50:37 np0005596060 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 26 11:50:37 np0005596060 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 26 11:50:37 np0005596060 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 26 11:50:37 np0005596060 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 26 11:50:37 np0005596060 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 26 11:50:37 np0005596060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 26 11:50:37 np0005596060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 26 11:50:37 np0005596060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 26 11:50:37 np0005596060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 26 11:50:37 np0005596060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 26 11:50:37 np0005596060 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 26 11:50:37 np0005596060 kernel: TSC deadline timer available
Jan 26 11:50:37 np0005596060 kernel: CPU topo: Max. logical packages:   8
Jan 26 11:50:37 np0005596060 kernel: CPU topo: Max. logical dies:       8
Jan 26 11:50:37 np0005596060 kernel: CPU topo: Max. dies per package:   1
Jan 26 11:50:37 np0005596060 kernel: CPU topo: Max. threads per core:   1
Jan 26 11:50:37 np0005596060 kernel: CPU topo: Num. cores per package:     1
Jan 26 11:50:37 np0005596060 kernel: CPU topo: Num. threads per package:   1
Jan 26 11:50:37 np0005596060 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 26 11:50:37 np0005596060 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 26 11:50:37 np0005596060 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 26 11:50:37 np0005596060 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 26 11:50:37 np0005596060 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 26 11:50:37 np0005596060 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 26 11:50:37 np0005596060 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 26 11:50:37 np0005596060 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 26 11:50:37 np0005596060 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 26 11:50:37 np0005596060 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 26 11:50:37 np0005596060 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 26 11:50:37 np0005596060 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 26 11:50:37 np0005596060 kernel: Booting paravirtualized kernel on KVM
Jan 26 11:50:37 np0005596060 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 26 11:50:37 np0005596060 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 26 11:50:37 np0005596060 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 26 11:50:37 np0005596060 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 26 11:50:37 np0005596060 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 26 11:50:37 np0005596060 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 26 11:50:37 np0005596060 kernel: random: crng init done
Jan 26 11:50:37 np0005596060 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 26 11:50:37 np0005596060 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 26 11:50:37 np0005596060 kernel: Fallback order for Node 0: 0 
Jan 26 11:50:37 np0005596060 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 26 11:50:37 np0005596060 kernel: Policy zone: Normal
Jan 26 11:50:37 np0005596060 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 26 11:50:37 np0005596060 kernel: software IO TLB: area num 8.
Jan 26 11:50:37 np0005596060 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 26 11:50:37 np0005596060 kernel: ftrace: allocating 49417 entries in 194 pages
Jan 26 11:50:37 np0005596060 kernel: ftrace: allocated 194 pages with 3 groups
Jan 26 11:50:37 np0005596060 kernel: Dynamic Preempt: voluntary
Jan 26 11:50:37 np0005596060 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 26 11:50:37 np0005596060 kernel: rcu: 	RCU event tracing is enabled.
Jan 26 11:50:37 np0005596060 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 26 11:50:37 np0005596060 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 26 11:50:37 np0005596060 kernel: 	Rude variant of Tasks RCU enabled.
Jan 26 11:50:37 np0005596060 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 26 11:50:37 np0005596060 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 26 11:50:37 np0005596060 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 26 11:50:37 np0005596060 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 26 11:50:37 np0005596060 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 26 11:50:37 np0005596060 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 26 11:50:37 np0005596060 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 26 11:50:37 np0005596060 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 26 11:50:37 np0005596060 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 26 11:50:37 np0005596060 kernel: Console: colour VGA+ 80x25
Jan 26 11:50:37 np0005596060 kernel: printk: console [ttyS0] enabled
Jan 26 11:50:37 np0005596060 kernel: ACPI: Core revision 20230331
Jan 26 11:50:37 np0005596060 kernel: APIC: Switch to symmetric I/O mode setup
Jan 26 11:50:37 np0005596060 kernel: x2apic enabled
Jan 26 11:50:37 np0005596060 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 26 11:50:37 np0005596060 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 26 11:50:37 np0005596060 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 26 11:50:37 np0005596060 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 26 11:50:37 np0005596060 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 26 11:50:37 np0005596060 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 26 11:50:37 np0005596060 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 26 11:50:37 np0005596060 kernel: Spectre V2 : Mitigation: Retpolines
Jan 26 11:50:37 np0005596060 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 26 11:50:37 np0005596060 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 26 11:50:37 np0005596060 kernel: RETBleed: Mitigation: untrained return thunk
Jan 26 11:50:37 np0005596060 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 26 11:50:37 np0005596060 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 26 11:50:37 np0005596060 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 26 11:50:37 np0005596060 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 26 11:50:37 np0005596060 kernel: x86/bugs: return thunk changed
Jan 26 11:50:37 np0005596060 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 26 11:50:37 np0005596060 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 26 11:50:37 np0005596060 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 26 11:50:37 np0005596060 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 26 11:50:37 np0005596060 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 26 11:50:37 np0005596060 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 26 11:50:37 np0005596060 kernel: Freeing SMP alternatives memory: 40K
Jan 26 11:50:37 np0005596060 kernel: pid_max: default: 32768 minimum: 301
Jan 26 11:50:37 np0005596060 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 26 11:50:37 np0005596060 kernel: landlock: Up and running.
Jan 26 11:50:37 np0005596060 kernel: Yama: becoming mindful.
Jan 26 11:50:37 np0005596060 kernel: SELinux:  Initializing.
Jan 26 11:50:37 np0005596060 kernel: LSM support for eBPF active
Jan 26 11:50:37 np0005596060 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 26 11:50:37 np0005596060 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 26 11:50:37 np0005596060 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 26 11:50:37 np0005596060 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 26 11:50:37 np0005596060 kernel: ... version:                0
Jan 26 11:50:37 np0005596060 kernel: ... bit width:              48
Jan 26 11:50:37 np0005596060 kernel: ... generic registers:      6
Jan 26 11:50:37 np0005596060 kernel: ... value mask:             0000ffffffffffff
Jan 26 11:50:37 np0005596060 kernel: ... max period:             00007fffffffffff
Jan 26 11:50:37 np0005596060 kernel: ... fixed-purpose events:   0
Jan 26 11:50:37 np0005596060 kernel: ... event mask:             000000000000003f
Jan 26 11:50:37 np0005596060 kernel: signal: max sigframe size: 1776
Jan 26 11:50:37 np0005596060 kernel: rcu: Hierarchical SRCU implementation.
Jan 26 11:50:37 np0005596060 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 26 11:50:37 np0005596060 kernel: smp: Bringing up secondary CPUs ...
Jan 26 11:50:37 np0005596060 kernel: smpboot: x86: Booting SMP configuration:
Jan 26 11:50:37 np0005596060 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 26 11:50:37 np0005596060 kernel: smp: Brought up 1 node, 8 CPUs
Jan 26 11:50:37 np0005596060 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 26 11:50:37 np0005596060 kernel: node 0 deferred pages initialised in 9ms
Jan 26 11:50:37 np0005596060 kernel: Memory: 7763792K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
Jan 26 11:50:37 np0005596060 kernel: devtmpfs: initialized
Jan 26 11:50:37 np0005596060 kernel: x86/mm: Memory block size: 128MB
Jan 26 11:50:37 np0005596060 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 26 11:50:37 np0005596060 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 26 11:50:37 np0005596060 kernel: pinctrl core: initialized pinctrl subsystem
Jan 26 11:50:37 np0005596060 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 26 11:50:37 np0005596060 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 26 11:50:37 np0005596060 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 26 11:50:37 np0005596060 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 26 11:50:37 np0005596060 kernel: audit: initializing netlink subsys (disabled)
Jan 26 11:50:37 np0005596060 kernel: audit: type=2000 audit(1769446235.591:1): state=initialized audit_enabled=0 res=1
Jan 26 11:50:37 np0005596060 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 26 11:50:37 np0005596060 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 26 11:50:37 np0005596060 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 26 11:50:37 np0005596060 kernel: cpuidle: using governor menu
Jan 26 11:50:37 np0005596060 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 26 11:50:37 np0005596060 kernel: PCI: Using configuration type 1 for base access
Jan 26 11:50:37 np0005596060 kernel: PCI: Using configuration type 1 for extended access
Jan 26 11:50:37 np0005596060 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 26 11:50:37 np0005596060 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 26 11:50:37 np0005596060 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 26 11:50:37 np0005596060 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 26 11:50:37 np0005596060 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 26 11:50:37 np0005596060 kernel: Demotion targets for Node 0: null
Jan 26 11:50:37 np0005596060 kernel: cryptd: max_cpu_qlen set to 1000
Jan 26 11:50:37 np0005596060 kernel: ACPI: Added _OSI(Module Device)
Jan 26 11:50:37 np0005596060 kernel: ACPI: Added _OSI(Processor Device)
Jan 26 11:50:37 np0005596060 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 26 11:50:37 np0005596060 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 26 11:50:37 np0005596060 kernel: ACPI: Interpreter enabled
Jan 26 11:50:37 np0005596060 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 26 11:50:37 np0005596060 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 26 11:50:37 np0005596060 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 26 11:50:37 np0005596060 kernel: PCI: Using E820 reservations for host bridge windows
Jan 26 11:50:37 np0005596060 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 26 11:50:37 np0005596060 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 26 11:50:37 np0005596060 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [3] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [4] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [5] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [6] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [7] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [8] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [9] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [10] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [11] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [12] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [13] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [14] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [15] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [16] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [17] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [18] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [19] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [20] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [21] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [22] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [23] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [24] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [25] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [26] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [27] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [28] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [29] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [30] registered
Jan 26 11:50:37 np0005596060 kernel: acpiphp: Slot [31] registered
Jan 26 11:50:37 np0005596060 kernel: PCI host bridge to bus 0000:00
Jan 26 11:50:37 np0005596060 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 26 11:50:37 np0005596060 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 26 11:50:37 np0005596060 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 26 11:50:37 np0005596060 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 26 11:50:37 np0005596060 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 26 11:50:37 np0005596060 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 26 11:50:37 np0005596060 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 26 11:50:37 np0005596060 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 26 11:50:37 np0005596060 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 26 11:50:37 np0005596060 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 26 11:50:37 np0005596060 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 26 11:50:37 np0005596060 kernel: iommu: Default domain type: Translated
Jan 26 11:50:37 np0005596060 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 26 11:50:37 np0005596060 kernel: SCSI subsystem initialized
Jan 26 11:50:37 np0005596060 kernel: ACPI: bus type USB registered
Jan 26 11:50:37 np0005596060 kernel: usbcore: registered new interface driver usbfs
Jan 26 11:50:37 np0005596060 kernel: usbcore: registered new interface driver hub
Jan 26 11:50:37 np0005596060 kernel: usbcore: registered new device driver usb
Jan 26 11:50:37 np0005596060 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 26 11:50:37 np0005596060 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 26 11:50:37 np0005596060 kernel: PTP clock support registered
Jan 26 11:50:37 np0005596060 kernel: EDAC MC: Ver: 3.0.0
Jan 26 11:50:37 np0005596060 kernel: NetLabel: Initializing
Jan 26 11:50:37 np0005596060 kernel: NetLabel:  domain hash size = 128
Jan 26 11:50:37 np0005596060 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 26 11:50:37 np0005596060 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 26 11:50:37 np0005596060 kernel: PCI: Using ACPI for IRQ routing
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 26 11:50:37 np0005596060 kernel: vgaarb: loaded
Jan 26 11:50:37 np0005596060 kernel: clocksource: Switched to clocksource kvm-clock
Jan 26 11:50:37 np0005596060 kernel: VFS: Disk quotas dquot_6.6.0
Jan 26 11:50:37 np0005596060 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 26 11:50:37 np0005596060 kernel: pnp: PnP ACPI init
Jan 26 11:50:37 np0005596060 kernel: pnp: PnP ACPI: found 5 devices
Jan 26 11:50:37 np0005596060 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 26 11:50:37 np0005596060 kernel: NET: Registered PF_INET protocol family
Jan 26 11:50:37 np0005596060 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 26 11:50:37 np0005596060 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 26 11:50:37 np0005596060 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 26 11:50:37 np0005596060 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 26 11:50:37 np0005596060 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 26 11:50:37 np0005596060 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 26 11:50:37 np0005596060 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 26 11:50:37 np0005596060 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 26 11:50:37 np0005596060 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 26 11:50:37 np0005596060 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 26 11:50:37 np0005596060 kernel: NET: Registered PF_XDP protocol family
Jan 26 11:50:37 np0005596060 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 26 11:50:37 np0005596060 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 26 11:50:37 np0005596060 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 26 11:50:37 np0005596060 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 26 11:50:37 np0005596060 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 26 11:50:37 np0005596060 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 26 11:50:37 np0005596060 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73556 usecs
Jan 26 11:50:37 np0005596060 kernel: PCI: CLS 0 bytes, default 64
Jan 26 11:50:37 np0005596060 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 26 11:50:37 np0005596060 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 26 11:50:37 np0005596060 kernel: Trying to unpack rootfs image as initramfs...
Jan 26 11:50:37 np0005596060 kernel: ACPI: bus type thunderbolt registered
Jan 26 11:50:37 np0005596060 kernel: Initialise system trusted keyrings
Jan 26 11:50:37 np0005596060 kernel: Key type blacklist registered
Jan 26 11:50:37 np0005596060 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 26 11:50:37 np0005596060 kernel: zbud: loaded
Jan 26 11:50:37 np0005596060 kernel: integrity: Platform Keyring initialized
Jan 26 11:50:37 np0005596060 kernel: integrity: Machine keyring initialized
Jan 26 11:50:37 np0005596060 kernel: Freeing initrd memory: 87956K
Jan 26 11:50:37 np0005596060 kernel: NET: Registered PF_ALG protocol family
Jan 26 11:50:37 np0005596060 kernel: xor: automatically using best checksumming function   avx       
Jan 26 11:50:37 np0005596060 kernel: Key type asymmetric registered
Jan 26 11:50:37 np0005596060 kernel: Asymmetric key parser 'x509' registered
Jan 26 11:50:37 np0005596060 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 26 11:50:37 np0005596060 kernel: io scheduler mq-deadline registered
Jan 26 11:50:37 np0005596060 kernel: io scheduler kyber registered
Jan 26 11:50:37 np0005596060 kernel: io scheduler bfq registered
Jan 26 11:50:37 np0005596060 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 26 11:50:37 np0005596060 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 26 11:50:37 np0005596060 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 26 11:50:37 np0005596060 kernel: ACPI: button: Power Button [PWRF]
Jan 26 11:50:37 np0005596060 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 26 11:50:37 np0005596060 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 26 11:50:37 np0005596060 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 26 11:50:37 np0005596060 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 26 11:50:37 np0005596060 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 26 11:50:37 np0005596060 kernel: Non-volatile memory driver v1.3
Jan 26 11:50:37 np0005596060 kernel: rdac: device handler registered
Jan 26 11:50:37 np0005596060 kernel: hp_sw: device handler registered
Jan 26 11:50:37 np0005596060 kernel: emc: device handler registered
Jan 26 11:50:37 np0005596060 kernel: alua: device handler registered
Jan 26 11:50:37 np0005596060 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 26 11:50:37 np0005596060 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 26 11:50:37 np0005596060 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 26 11:50:37 np0005596060 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 26 11:50:37 np0005596060 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 26 11:50:37 np0005596060 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 26 11:50:37 np0005596060 kernel: usb usb1: Product: UHCI Host Controller
Jan 26 11:50:37 np0005596060 kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 26 11:50:37 np0005596060 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 26 11:50:37 np0005596060 kernel: hub 1-0:1.0: USB hub found
Jan 26 11:50:37 np0005596060 kernel: hub 1-0:1.0: 2 ports detected
Jan 26 11:50:37 np0005596060 kernel: usbcore: registered new interface driver usbserial_generic
Jan 26 11:50:37 np0005596060 kernel: usbserial: USB Serial support registered for generic
Jan 26 11:50:37 np0005596060 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 26 11:50:37 np0005596060 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 26 11:50:37 np0005596060 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 26 11:50:37 np0005596060 kernel: mousedev: PS/2 mouse device common for all mice
Jan 26 11:50:37 np0005596060 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 26 11:50:37 np0005596060 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 26 11:50:37 np0005596060 kernel: rtc_cmos 00:04: registered as rtc0
Jan 26 11:50:37 np0005596060 kernel: rtc_cmos 00:04: setting system clock to 2026-01-26T16:50:36 UTC (1769446236)
Jan 26 11:50:37 np0005596060 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 26 11:50:37 np0005596060 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 26 11:50:37 np0005596060 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 26 11:50:37 np0005596060 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 26 11:50:37 np0005596060 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 26 11:50:37 np0005596060 kernel: usbcore: registered new interface driver usbhid
Jan 26 11:50:37 np0005596060 kernel: usbhid: USB HID core driver
Jan 26 11:50:37 np0005596060 kernel: drop_monitor: Initializing network drop monitor service
Jan 26 11:50:37 np0005596060 kernel: Initializing XFRM netlink socket
Jan 26 11:50:37 np0005596060 kernel: NET: Registered PF_INET6 protocol family
Jan 26 11:50:37 np0005596060 kernel: Segment Routing with IPv6
Jan 26 11:50:37 np0005596060 kernel: NET: Registered PF_PACKET protocol family
Jan 26 11:50:37 np0005596060 kernel: mpls_gso: MPLS GSO support
Jan 26 11:50:37 np0005596060 kernel: IPI shorthand broadcast: enabled
Jan 26 11:50:37 np0005596060 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 26 11:50:37 np0005596060 kernel: AES CTR mode by8 optimization enabled
Jan 26 11:50:37 np0005596060 kernel: sched_clock: Marking stable (1177002429, 150092820)->(1407542319, -80447070)
Jan 26 11:50:37 np0005596060 kernel: registered taskstats version 1
Jan 26 11:50:37 np0005596060 kernel: Loading compiled-in X.509 certificates
Jan 26 11:50:37 np0005596060 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 26 11:50:37 np0005596060 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 26 11:50:37 np0005596060 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 26 11:50:37 np0005596060 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 26 11:50:37 np0005596060 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 26 11:50:37 np0005596060 kernel: Demotion targets for Node 0: null
Jan 26 11:50:37 np0005596060 kernel: page_owner is disabled
Jan 26 11:50:37 np0005596060 kernel: Key type .fscrypt registered
Jan 26 11:50:37 np0005596060 kernel: Key type fscrypt-provisioning registered
Jan 26 11:50:37 np0005596060 kernel: Key type big_key registered
Jan 26 11:50:37 np0005596060 kernel: Key type encrypted registered
Jan 26 11:50:37 np0005596060 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 26 11:50:37 np0005596060 kernel: Loading compiled-in module X.509 certificates
Jan 26 11:50:37 np0005596060 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 26 11:50:37 np0005596060 kernel: ima: Allocated hash algorithm: sha256
Jan 26 11:50:37 np0005596060 kernel: ima: No architecture policies found
Jan 26 11:50:37 np0005596060 kernel: evm: Initialising EVM extended attributes:
Jan 26 11:50:37 np0005596060 kernel: evm: security.selinux
Jan 26 11:50:37 np0005596060 kernel: evm: security.SMACK64 (disabled)
Jan 26 11:50:37 np0005596060 kernel: evm: security.SMACK64EXEC (disabled)
Jan 26 11:50:37 np0005596060 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 26 11:50:37 np0005596060 kernel: evm: security.SMACK64MMAP (disabled)
Jan 26 11:50:37 np0005596060 kernel: evm: security.apparmor (disabled)
Jan 26 11:50:37 np0005596060 kernel: evm: security.ima
Jan 26 11:50:37 np0005596060 kernel: evm: security.capability
Jan 26 11:50:37 np0005596060 kernel: evm: HMAC attrs: 0x1
Jan 26 11:50:37 np0005596060 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 26 11:50:37 np0005596060 kernel: Running certificate verification RSA selftest
Jan 26 11:50:37 np0005596060 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 26 11:50:37 np0005596060 kernel: Running certificate verification ECDSA selftest
Jan 26 11:50:37 np0005596060 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 26 11:50:37 np0005596060 kernel: clk: Disabling unused clocks
Jan 26 11:50:37 np0005596060 kernel: Freeing unused decrypted memory: 2028K
Jan 26 11:50:37 np0005596060 kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 26 11:50:37 np0005596060 kernel: Write protecting the kernel read-only data: 30720k
Jan 26 11:50:37 np0005596060 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 26 11:50:37 np0005596060 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 26 11:50:37 np0005596060 kernel: Run /init as init process
Jan 26 11:50:37 np0005596060 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 26 11:50:37 np0005596060 systemd: Detected virtualization kvm.
Jan 26 11:50:37 np0005596060 systemd: Detected architecture x86-64.
Jan 26 11:50:37 np0005596060 systemd: Running in initrd.
Jan 26 11:50:37 np0005596060 systemd: No hostname configured, using default hostname.
Jan 26 11:50:37 np0005596060 systemd: Hostname set to <localhost>.
Jan 26 11:50:37 np0005596060 systemd: Initializing machine ID from VM UUID.
Jan 26 11:50:37 np0005596060 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 26 11:50:37 np0005596060 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 26 11:50:37 np0005596060 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 26 11:50:37 np0005596060 kernel: usb 1-1: Manufacturer: QEMU
Jan 26 11:50:37 np0005596060 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 26 11:50:37 np0005596060 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 26 11:50:37 np0005596060 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 26 11:50:37 np0005596060 systemd: Queued start job for default target Initrd Default Target.
Jan 26 11:50:37 np0005596060 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 26 11:50:37 np0005596060 systemd: Reached target Local Encrypted Volumes.
Jan 26 11:50:37 np0005596060 systemd: Reached target Initrd /usr File System.
Jan 26 11:50:37 np0005596060 systemd: Reached target Local File Systems.
Jan 26 11:50:37 np0005596060 systemd: Reached target Path Units.
Jan 26 11:50:37 np0005596060 systemd: Reached target Slice Units.
Jan 26 11:50:37 np0005596060 systemd: Reached target Swaps.
Jan 26 11:50:37 np0005596060 systemd: Reached target Timer Units.
Jan 26 11:50:37 np0005596060 systemd: Listening on D-Bus System Message Bus Socket.
Jan 26 11:50:37 np0005596060 systemd: Listening on Journal Socket (/dev/log).
Jan 26 11:50:37 np0005596060 systemd: Listening on Journal Socket.
Jan 26 11:50:37 np0005596060 systemd: Listening on udev Control Socket.
Jan 26 11:50:37 np0005596060 systemd: Listening on udev Kernel Socket.
Jan 26 11:50:37 np0005596060 systemd: Reached target Socket Units.
Jan 26 11:50:37 np0005596060 systemd: Starting Create List of Static Device Nodes...
Jan 26 11:50:37 np0005596060 systemd: Starting Journal Service...
Jan 26 11:50:37 np0005596060 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 26 11:50:37 np0005596060 systemd: Starting Apply Kernel Variables...
Jan 26 11:50:37 np0005596060 systemd: Starting Create System Users...
Jan 26 11:50:37 np0005596060 systemd: Starting Setup Virtual Console...
Jan 26 11:50:37 np0005596060 systemd: Finished Create List of Static Device Nodes.
Jan 26 11:50:37 np0005596060 systemd: Finished Apply Kernel Variables.
Jan 26 11:50:37 np0005596060 systemd-journald[305]: Journal started
Jan 26 11:50:37 np0005596060 systemd-journald[305]: Runtime Journal (/run/log/journal/d27b7a4130de40e49f10b4e4f5902919) is 8.0M, max 153.6M, 145.6M free.
Jan 26 11:50:37 np0005596060 systemd: Started Journal Service.
Jan 26 11:50:37 np0005596060 systemd-sysusers[310]: Creating group 'users' with GID 100.
Jan 26 11:50:37 np0005596060 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Jan 26 11:50:37 np0005596060 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 26 11:50:37 np0005596060 systemd[1]: Finished Create System Users.
Jan 26 11:50:37 np0005596060 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 26 11:50:37 np0005596060 systemd[1]: Starting Create Volatile Files and Directories...
Jan 26 11:50:37 np0005596060 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 26 11:50:37 np0005596060 systemd[1]: Finished Create Volatile Files and Directories.
Jan 26 11:50:37 np0005596060 systemd[1]: Finished Setup Virtual Console.
Jan 26 11:50:37 np0005596060 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 26 11:50:37 np0005596060 systemd[1]: Starting dracut cmdline hook...
Jan 26 11:50:37 np0005596060 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Jan 26 11:50:37 np0005596060 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 26 11:50:37 np0005596060 systemd[1]: Finished dracut cmdline hook.
Jan 26 11:50:37 np0005596060 systemd[1]: Starting dracut pre-udev hook...
Jan 26 11:50:37 np0005596060 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 26 11:50:37 np0005596060 kernel: device-mapper: uevent: version 1.0.3
Jan 26 11:50:37 np0005596060 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 26 11:50:37 np0005596060 kernel: RPC: Registered named UNIX socket transport module.
Jan 26 11:50:37 np0005596060 kernel: RPC: Registered udp transport module.
Jan 26 11:50:37 np0005596060 kernel: RPC: Registered tcp transport module.
Jan 26 11:50:37 np0005596060 kernel: RPC: Registered tcp-with-tls transport module.
Jan 26 11:50:37 np0005596060 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 26 11:50:37 np0005596060 rpc.statd[441]: Version 2.5.4 starting
Jan 26 11:50:37 np0005596060 rpc.statd[441]: Initializing NSM state
Jan 26 11:50:37 np0005596060 rpc.idmapd[446]: Setting log level to 0
Jan 26 11:50:37 np0005596060 systemd[1]: Finished dracut pre-udev hook.
Jan 26 11:50:37 np0005596060 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 26 11:50:37 np0005596060 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Jan 26 11:50:37 np0005596060 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 26 11:50:37 np0005596060 systemd[1]: Starting dracut pre-trigger hook...
Jan 26 11:50:37 np0005596060 systemd[1]: Finished dracut pre-trigger hook.
Jan 26 11:50:37 np0005596060 systemd[1]: Starting Coldplug All udev Devices...
Jan 26 11:50:37 np0005596060 systemd[1]: Finished Coldplug All udev Devices.
Jan 26 11:50:37 np0005596060 systemd[1]: Created slice Slice /system/modprobe.
Jan 26 11:50:37 np0005596060 systemd[1]: Starting Load Kernel Module configfs...
Jan 26 11:50:37 np0005596060 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 26 11:50:37 np0005596060 systemd[1]: Reached target Network.
Jan 26 11:50:37 np0005596060 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 26 11:50:37 np0005596060 systemd[1]: Starting dracut initqueue hook...
Jan 26 11:50:37 np0005596060 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 26 11:50:37 np0005596060 systemd[1]: Finished Load Kernel Module configfs.
Jan 26 11:50:38 np0005596060 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 26 11:50:38 np0005596060 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 26 11:50:38 np0005596060 kernel: vda: vda1
Jan 26 11:50:38 np0005596060 kernel: scsi host0: ata_piix
Jan 26 11:50:38 np0005596060 kernel: scsi host1: ata_piix
Jan 26 11:50:38 np0005596060 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 26 11:50:38 np0005596060 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 26 11:50:38 np0005596060 systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 26 11:50:38 np0005596060 systemd[1]: Reached target Initrd Root Device.
Jan 26 11:50:38 np0005596060 systemd[1]: Mounting Kernel Configuration File System...
Jan 26 11:50:38 np0005596060 kernel: ata1: found unknown device (class 0)
Jan 26 11:50:38 np0005596060 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 26 11:50:38 np0005596060 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 26 11:50:38 np0005596060 systemd[1]: Mounted Kernel Configuration File System.
Jan 26 11:50:38 np0005596060 systemd[1]: Reached target System Initialization.
Jan 26 11:50:38 np0005596060 systemd[1]: Reached target Basic System.
Jan 26 11:50:38 np0005596060 systemd-udevd[481]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 11:50:38 np0005596060 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 26 11:50:38 np0005596060 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 26 11:50:38 np0005596060 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 26 11:50:38 np0005596060 systemd[1]: Finished dracut initqueue hook.
Jan 26 11:50:38 np0005596060 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 26 11:50:38 np0005596060 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 26 11:50:38 np0005596060 systemd[1]: Reached target Remote File Systems.
Jan 26 11:50:38 np0005596060 systemd[1]: Starting dracut pre-mount hook...
Jan 26 11:50:38 np0005596060 systemd[1]: Finished dracut pre-mount hook.
Jan 26 11:50:38 np0005596060 systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 26 11:50:38 np0005596060 systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Jan 26 11:50:38 np0005596060 systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 26 11:50:38 np0005596060 systemd[1]: Mounting /sysroot...
Jan 26 11:50:38 np0005596060 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 26 11:50:38 np0005596060 kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 26 11:50:38 np0005596060 kernel: XFS (vda1): Ending clean mount
Jan 26 11:50:38 np0005596060 systemd[1]: Mounted /sysroot.
Jan 26 11:50:38 np0005596060 systemd[1]: Reached target Initrd Root File System.
Jan 26 11:50:38 np0005596060 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 26 11:50:38 np0005596060 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 26 11:50:38 np0005596060 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 26 11:50:38 np0005596060 systemd[1]: Reached target Initrd File Systems.
Jan 26 11:50:38 np0005596060 systemd[1]: Reached target Initrd Default Target.
Jan 26 11:50:38 np0005596060 systemd[1]: Starting dracut mount hook...
Jan 26 11:50:38 np0005596060 systemd[1]: Finished dracut mount hook.
Jan 26 11:50:38 np0005596060 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 26 11:50:39 np0005596060 rpc.idmapd[446]: exiting on signal 15
Jan 26 11:50:39 np0005596060 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 26 11:50:39 np0005596060 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Network.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Timer Units.
Jan 26 11:50:39 np0005596060 systemd[1]: dbus.socket: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 26 11:50:39 np0005596060 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Initrd Default Target.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Basic System.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Initrd Root Device.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Initrd /usr File System.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Path Units.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Remote File Systems.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Slice Units.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Socket Units.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target System Initialization.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Local File Systems.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Swaps.
Jan 26 11:50:39 np0005596060 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped dracut mount hook.
Jan 26 11:50:39 np0005596060 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped dracut pre-mount hook.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 26 11:50:39 np0005596060 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 26 11:50:39 np0005596060 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped dracut initqueue hook.
Jan 26 11:50:39 np0005596060 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped Apply Kernel Variables.
Jan 26 11:50:39 np0005596060 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 26 11:50:39 np0005596060 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped Coldplug All udev Devices.
Jan 26 11:50:39 np0005596060 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped dracut pre-trigger hook.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 26 11:50:39 np0005596060 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped Setup Virtual Console.
Jan 26 11:50:39 np0005596060 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 26 11:50:39 np0005596060 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 26 11:50:39 np0005596060 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Closed udev Control Socket.
Jan 26 11:50:39 np0005596060 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Closed udev Kernel Socket.
Jan 26 11:50:39 np0005596060 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped dracut pre-udev hook.
Jan 26 11:50:39 np0005596060 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped dracut cmdline hook.
Jan 26 11:50:39 np0005596060 systemd[1]: Starting Cleanup udev Database...
Jan 26 11:50:39 np0005596060 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 26 11:50:39 np0005596060 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 26 11:50:39 np0005596060 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Stopped Create System Users.
Jan 26 11:50:39 np0005596060 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Cleanup udev Database.
Jan 26 11:50:39 np0005596060 systemd[1]: Reached target Switch Root.
Jan 26 11:50:39 np0005596060 systemd[1]: Starting Switch Root...
Jan 26 11:50:39 np0005596060 systemd[1]: Switching root.
Jan 26 11:50:39 np0005596060 systemd-journald[305]: Journal stopped
Jan 26 11:50:39 np0005596060 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 26 11:50:39 np0005596060 kernel: audit: type=1404 audit(1769446239.288:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 26 11:50:39 np0005596060 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 11:50:39 np0005596060 kernel: SELinux:  policy capability open_perms=1
Jan 26 11:50:39 np0005596060 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 11:50:39 np0005596060 kernel: SELinux:  policy capability always_check_network=0
Jan 26 11:50:39 np0005596060 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 11:50:39 np0005596060 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 11:50:39 np0005596060 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 11:50:39 np0005596060 kernel: audit: type=1403 audit(1769446239.410:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 26 11:50:39 np0005596060 systemd: Successfully loaded SELinux policy in 124.522ms.
Jan 26 11:50:39 np0005596060 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.131ms.
Jan 26 11:50:39 np0005596060 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 26 11:50:39 np0005596060 systemd: Detected virtualization kvm.
Jan 26 11:50:39 np0005596060 systemd: Detected architecture x86-64.
Jan 26 11:50:39 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 11:50:39 np0005596060 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd: Stopped Switch Root.
Jan 26 11:50:39 np0005596060 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 26 11:50:39 np0005596060 systemd: Created slice Slice /system/getty.
Jan 26 11:50:39 np0005596060 systemd: Created slice Slice /system/serial-getty.
Jan 26 11:50:39 np0005596060 systemd: Created slice Slice /system/sshd-keygen.
Jan 26 11:50:39 np0005596060 systemd: Created slice User and Session Slice.
Jan 26 11:50:39 np0005596060 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 26 11:50:39 np0005596060 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 26 11:50:39 np0005596060 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 26 11:50:39 np0005596060 systemd: Reached target Local Encrypted Volumes.
Jan 26 11:50:39 np0005596060 systemd: Stopped target Switch Root.
Jan 26 11:50:39 np0005596060 systemd: Stopped target Initrd File Systems.
Jan 26 11:50:39 np0005596060 systemd: Stopped target Initrd Root File System.
Jan 26 11:50:39 np0005596060 systemd: Reached target Local Integrity Protected Volumes.
Jan 26 11:50:39 np0005596060 systemd: Reached target Path Units.
Jan 26 11:50:39 np0005596060 systemd: Reached target rpc_pipefs.target.
Jan 26 11:50:39 np0005596060 systemd: Reached target Slice Units.
Jan 26 11:50:39 np0005596060 systemd: Reached target Swaps.
Jan 26 11:50:39 np0005596060 systemd: Reached target Local Verity Protected Volumes.
Jan 26 11:50:39 np0005596060 systemd: Listening on RPCbind Server Activation Socket.
Jan 26 11:50:39 np0005596060 systemd: Reached target RPC Port Mapper.
Jan 26 11:50:39 np0005596060 systemd: Listening on Process Core Dump Socket.
Jan 26 11:50:39 np0005596060 systemd: Listening on initctl Compatibility Named Pipe.
Jan 26 11:50:39 np0005596060 systemd: Listening on udev Control Socket.
Jan 26 11:50:39 np0005596060 systemd: Listening on udev Kernel Socket.
Jan 26 11:50:39 np0005596060 systemd: Mounting Huge Pages File System...
Jan 26 11:50:39 np0005596060 systemd: Mounting POSIX Message Queue File System...
Jan 26 11:50:39 np0005596060 systemd: Mounting Kernel Debug File System...
Jan 26 11:50:39 np0005596060 systemd: Mounting Kernel Trace File System...
Jan 26 11:50:39 np0005596060 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 26 11:50:39 np0005596060 systemd: Starting Create List of Static Device Nodes...
Jan 26 11:50:39 np0005596060 systemd: Starting Load Kernel Module configfs...
Jan 26 11:50:39 np0005596060 systemd: Starting Load Kernel Module drm...
Jan 26 11:50:39 np0005596060 systemd: Starting Load Kernel Module efi_pstore...
Jan 26 11:50:39 np0005596060 systemd: Starting Load Kernel Module fuse...
Jan 26 11:50:39 np0005596060 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 26 11:50:39 np0005596060 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd: Stopped File System Check on Root Device.
Jan 26 11:50:39 np0005596060 systemd: Stopped Journal Service.
Jan 26 11:50:39 np0005596060 kernel: fuse: init (API version 7.37)
Jan 26 11:50:39 np0005596060 systemd: Starting Journal Service...
Jan 26 11:50:39 np0005596060 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 26 11:50:39 np0005596060 systemd: Starting Generate network units from Kernel command line...
Jan 26 11:50:39 np0005596060 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 26 11:50:39 np0005596060 systemd: Starting Remount Root and Kernel File Systems...
Jan 26 11:50:39 np0005596060 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 26 11:50:39 np0005596060 systemd: Starting Apply Kernel Variables...
Jan 26 11:50:39 np0005596060 systemd: Starting Coldplug All udev Devices...
Jan 26 11:50:39 np0005596060 systemd: Mounted Huge Pages File System.
Jan 26 11:50:39 np0005596060 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 26 11:50:39 np0005596060 systemd: Mounted POSIX Message Queue File System.
Jan 26 11:50:39 np0005596060 systemd-journald[676]: Journal started
Jan 26 11:50:39 np0005596060 systemd-journald[676]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 26 11:50:39 np0005596060 systemd[1]: Queued start job for default target Multi-User System.
Jan 26 11:50:39 np0005596060 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd: Started Journal Service.
Jan 26 11:50:39 np0005596060 systemd[1]: Mounted Kernel Debug File System.
Jan 26 11:50:39 np0005596060 systemd[1]: Mounted Kernel Trace File System.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Create List of Static Device Nodes.
Jan 26 11:50:39 np0005596060 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Load Kernel Module configfs.
Jan 26 11:50:39 np0005596060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 26 11:50:39 np0005596060 kernel: ACPI: bus type drm_connector registered
Jan 26 11:50:39 np0005596060 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Load Kernel Module fuse.
Jan 26 11:50:39 np0005596060 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Load Kernel Module drm.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Generate network units from Kernel command line.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Apply Kernel Variables.
Jan 26 11:50:39 np0005596060 systemd[1]: Mounting FUSE Control File System...
Jan 26 11:50:39 np0005596060 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 26 11:50:39 np0005596060 systemd[1]: Starting Rebuild Hardware Database...
Jan 26 11:50:39 np0005596060 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 26 11:50:39 np0005596060 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 26 11:50:39 np0005596060 systemd[1]: Starting Load/Save OS Random Seed...
Jan 26 11:50:39 np0005596060 systemd[1]: Starting Create System Users...
Jan 26 11:50:39 np0005596060 systemd-journald[676]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 26 11:50:39 np0005596060 systemd-journald[676]: Received client request to flush runtime journal.
Jan 26 11:50:39 np0005596060 systemd[1]: Mounted FUSE Control File System.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Load/Save OS Random Seed.
Jan 26 11:50:39 np0005596060 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 26 11:50:39 np0005596060 systemd[1]: Finished Create System Users.
Jan 26 11:50:40 np0005596060 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 26 11:50:40 np0005596060 systemd[1]: Finished Coldplug All udev Devices.
Jan 26 11:50:40 np0005596060 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 26 11:50:40 np0005596060 systemd[1]: Reached target Preparation for Local File Systems.
Jan 26 11:50:40 np0005596060 systemd[1]: Reached target Local File Systems.
Jan 26 11:50:40 np0005596060 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 26 11:50:40 np0005596060 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 26 11:50:40 np0005596060 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 26 11:50:40 np0005596060 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 26 11:50:40 np0005596060 systemd[1]: Starting Automatic Boot Loader Update...
Jan 26 11:50:40 np0005596060 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 26 11:50:40 np0005596060 systemd[1]: Starting Create Volatile Files and Directories...
Jan 26 11:50:40 np0005596060 bootctl[694]: Couldn't find EFI system partition, skipping.
Jan 26 11:50:40 np0005596060 systemd[1]: Finished Automatic Boot Loader Update.
Jan 26 11:50:40 np0005596060 systemd[1]: Finished Create Volatile Files and Directories.
Jan 26 11:50:40 np0005596060 systemd[1]: Starting Security Auditing Service...
Jan 26 11:50:40 np0005596060 systemd[1]: Starting RPC Bind...
Jan 26 11:50:40 np0005596060 systemd[1]: Starting Rebuild Journal Catalog...
Jan 26 11:50:40 np0005596060 auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 26 11:50:40 np0005596060 auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 26 11:50:40 np0005596060 systemd[1]: Started RPC Bind.
Jan 26 11:50:40 np0005596060 systemd[1]: Finished Rebuild Journal Catalog.
Jan 26 11:50:40 np0005596060 augenrules[705]: /sbin/augenrules: No change
Jan 26 11:50:40 np0005596060 augenrules[720]: No rules
Jan 26 11:50:40 np0005596060 augenrules[720]: enabled 1
Jan 26 11:50:40 np0005596060 augenrules[720]: failure 1
Jan 26 11:50:40 np0005596060 augenrules[720]: pid 700
Jan 26 11:50:40 np0005596060 augenrules[720]: rate_limit 0
Jan 26 11:50:40 np0005596060 augenrules[720]: backlog_limit 8192
Jan 26 11:50:40 np0005596060 augenrules[720]: lost 0
Jan 26 11:50:40 np0005596060 augenrules[720]: backlog 1
Jan 26 11:50:40 np0005596060 augenrules[720]: backlog_wait_time 60000
Jan 26 11:50:40 np0005596060 augenrules[720]: backlog_wait_time_actual 0
Jan 26 11:50:40 np0005596060 augenrules[720]: enabled 1
Jan 26 11:50:40 np0005596060 augenrules[720]: failure 1
Jan 26 11:50:40 np0005596060 augenrules[720]: pid 700
Jan 26 11:50:40 np0005596060 augenrules[720]: rate_limit 0
Jan 26 11:50:40 np0005596060 augenrules[720]: backlog_limit 8192
Jan 26 11:50:40 np0005596060 augenrules[720]: lost 0
Jan 26 11:50:40 np0005596060 augenrules[720]: backlog 0
Jan 26 11:50:40 np0005596060 augenrules[720]: backlog_wait_time 60000
Jan 26 11:50:40 np0005596060 augenrules[720]: backlog_wait_time_actual 0
Jan 26 11:50:40 np0005596060 augenrules[720]: enabled 1
Jan 26 11:50:40 np0005596060 augenrules[720]: failure 1
Jan 26 11:50:40 np0005596060 augenrules[720]: pid 700
Jan 26 11:50:40 np0005596060 augenrules[720]: rate_limit 0
Jan 26 11:50:40 np0005596060 augenrules[720]: backlog_limit 8192
Jan 26 11:50:40 np0005596060 augenrules[720]: lost 0
Jan 26 11:50:40 np0005596060 augenrules[720]: backlog 0
Jan 26 11:50:40 np0005596060 augenrules[720]: backlog_wait_time 60000
Jan 26 11:50:40 np0005596060 augenrules[720]: backlog_wait_time_actual 0
Jan 26 11:50:40 np0005596060 systemd[1]: Started Security Auditing Service.
Jan 26 11:50:40 np0005596060 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 26 11:50:40 np0005596060 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 26 11:50:40 np0005596060 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 26 11:50:40 np0005596060 systemd[1]: Finished Rebuild Hardware Database.
Jan 26 11:50:40 np0005596060 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 26 11:50:40 np0005596060 systemd[1]: Starting Update is Completed...
Jan 26 11:50:40 np0005596060 systemd[1]: Finished Update is Completed.
Jan 26 11:50:40 np0005596060 systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Jan 26 11:50:40 np0005596060 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 26 11:50:40 np0005596060 systemd[1]: Reached target System Initialization.
Jan 26 11:50:40 np0005596060 systemd[1]: Started dnf makecache --timer.
Jan 26 11:50:40 np0005596060 systemd[1]: Started Daily rotation of log files.
Jan 26 11:50:40 np0005596060 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 26 11:50:40 np0005596060 systemd[1]: Reached target Timer Units.
Jan 26 11:50:40 np0005596060 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 26 11:50:40 np0005596060 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 26 11:50:40 np0005596060 systemd[1]: Reached target Socket Units.
Jan 26 11:50:40 np0005596060 systemd[1]: Starting D-Bus System Message Bus...
Jan 26 11:50:40 np0005596060 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 26 11:50:40 np0005596060 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 26 11:50:40 np0005596060 systemd[1]: Starting Load Kernel Module configfs...
Jan 26 11:50:40 np0005596060 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 26 11:50:40 np0005596060 systemd[1]: Finished Load Kernel Module configfs.
Jan 26 11:50:40 np0005596060 systemd-udevd[740]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 11:50:40 np0005596060 systemd[1]: Started D-Bus System Message Bus.
Jan 26 11:50:40 np0005596060 systemd[1]: Reached target Basic System.
Jan 26 11:50:40 np0005596060 dbus-broker-lau[768]: Ready
Jan 26 11:50:40 np0005596060 systemd[1]: Starting NTP client/server...
Jan 26 11:50:40 np0005596060 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 26 11:50:40 np0005596060 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 26 11:50:40 np0005596060 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 26 11:50:40 np0005596060 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 26 11:50:40 np0005596060 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 26 11:50:40 np0005596060 systemd[1]: Starting IPv4 firewall with iptables...
Jan 26 11:50:40 np0005596060 systemd[1]: Started irqbalance daemon.
Jan 26 11:50:40 np0005596060 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 26 11:50:40 np0005596060 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 11:50:40 np0005596060 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 11:50:40 np0005596060 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 11:50:40 np0005596060 systemd[1]: Reached target sshd-keygen.target.
Jan 26 11:50:40 np0005596060 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 26 11:50:40 np0005596060 systemd[1]: Reached target User and Group Name Lookups.
Jan 26 11:50:40 np0005596060 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 26 11:50:40 np0005596060 systemd[1]: Starting User Login Management...
Jan 26 11:50:40 np0005596060 chronyd[789]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 26 11:50:40 np0005596060 chronyd[789]: Loaded 0 symmetric keys
Jan 26 11:50:40 np0005596060 chronyd[789]: Using right/UTC timezone to obtain leap second data
Jan 26 11:50:40 np0005596060 chronyd[789]: Loaded seccomp filter (level 2)
Jan 26 11:50:40 np0005596060 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 26 11:50:40 np0005596060 systemd[1]: Started NTP client/server.
Jan 26 11:50:40 np0005596060 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 26 11:50:40 np0005596060 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 26 11:50:40 np0005596060 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 26 11:50:40 np0005596060 systemd-logind[786]: New seat seat0.
Jan 26 11:50:40 np0005596060 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 26 11:50:40 np0005596060 systemd[1]: Started User Login Management.
Jan 26 11:50:40 np0005596060 kernel: kvm_amd: TSC scaling supported
Jan 26 11:50:40 np0005596060 kernel: kvm_amd: Nested Virtualization enabled
Jan 26 11:50:40 np0005596060 kernel: kvm_amd: Nested Paging enabled
Jan 26 11:50:40 np0005596060 kernel: kvm_amd: LBR virtualization supported
Jan 26 11:50:40 np0005596060 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 26 11:50:41 np0005596060 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 26 11:50:41 np0005596060 kernel: Console: switching to colour dummy device 80x25
Jan 26 11:50:41 np0005596060 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 26 11:50:41 np0005596060 kernel: [drm] features: -context_init
Jan 26 11:50:41 np0005596060 kernel: [drm] number of scanouts: 1
Jan 26 11:50:41 np0005596060 kernel: [drm] number of cap sets: 0
Jan 26 11:50:41 np0005596060 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 26 11:50:41 np0005596060 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 26 11:50:41 np0005596060 kernel: Console: switching to colour frame buffer device 128x48
Jan 26 11:50:41 np0005596060 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 26 11:50:41 np0005596060 iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Jan 26 11:50:41 np0005596060 systemd[1]: Finished IPv4 firewall with iptables.
Jan 26 11:50:41 np0005596060 cloud-init[840]: Cloud-init v. 24.4-8.el9 running 'init-local' at Mon, 26 Jan 2026 16:50:41 +0000. Up 5.84 seconds.
Jan 26 11:50:41 np0005596060 systemd[1]: run-cloud\x2dinit-tmp-tmpzapbsbgc.mount: Deactivated successfully.
Jan 26 11:50:41 np0005596060 systemd[1]: Starting Hostname Service...
Jan 26 11:50:41 np0005596060 systemd[1]: Started Hostname Service.
Jan 26 11:50:41 np0005596060 systemd-hostnamed[854]: Hostname set to <np0005596060.novalocal> (static)
Jan 26 11:50:41 np0005596060 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 26 11:50:41 np0005596060 systemd[1]: Reached target Preparation for Network.
Jan 26 11:50:41 np0005596060 systemd[1]: Starting Network Manager...
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.6939] NetworkManager (version 1.54.3-2.el9) is starting... (boot:529c9622-ee15-4af2-bc4b-6a5a72e6844b)
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.6946] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7032] manager[0x560fa70a6000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7073] hostname: hostname: using hostnamed
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7073] hostname: static hostname changed from (none) to "np0005596060.novalocal"
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7079] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7203] manager[0x560fa70a6000]: rfkill: Wi-Fi hardware radio set enabled
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7204] manager[0x560fa70a6000]: rfkill: WWAN hardware radio set enabled
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7249] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 26 11:50:41 np0005596060 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7255] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7256] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7256] manager: Networking is enabled by state file
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7258] settings: Loaded settings plugin: keyfile (internal)
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7285] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7308] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7319] dhcp: init: Using DHCP client 'internal'
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7322] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7338] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7347] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7356] device (lo): Activation: starting connection 'lo' (8d71b0e0-2bcd-4be2-b29f-3f05a483f058)
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7366] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7369] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7398] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7402] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7405] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7407] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7409] device (eth0): carrier: link connected
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7413] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7419] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7425] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7429] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7430] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7433] manager: NetworkManager state is now CONNECTING
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7434] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 11:50:41 np0005596060 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7441] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7444] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 11:50:41 np0005596060 systemd[1]: Started Network Manager.
Jan 26 11:50:41 np0005596060 systemd[1]: Reached target Network.
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7485] dhcp4 (eth0): state changed new lease, address=38.129.56.171
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7492] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 26 11:50:41 np0005596060 systemd[1]: Starting Network Manager Wait Online...
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7508] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 11:50:41 np0005596060 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 26 11:50:41 np0005596060 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7605] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7609] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7612] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7621] device (lo): Activation: successful, device activated.
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7629] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7634] manager: NetworkManager state is now CONNECTED_SITE
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7637] device (eth0): Activation: successful, device activated.
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7642] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 26 11:50:41 np0005596060 NetworkManager[858]: <info>  [1769446241.7646] manager: startup complete
Jan 26 11:50:41 np0005596060 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 26 11:50:41 np0005596060 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 26 11:50:41 np0005596060 systemd[1]: Reached target NFS client services.
Jan 26 11:50:41 np0005596060 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 26 11:50:41 np0005596060 systemd[1]: Reached target Remote File Systems.
Jan 26 11:50:41 np0005596060 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 26 11:50:41 np0005596060 systemd[1]: Finished Network Manager Wait Online.
Jan 26 11:50:41 np0005596060 systemd[1]: Starting Cloud-init: Network Stage...
Jan 26 11:50:42 np0005596060 cloud-init[922]: Cloud-init v. 24.4-8.el9 running 'init' at Mon, 26 Jan 2026 16:50:42 +0000. Up 6.73 seconds.
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: |  eth0  | True |        38.129.56.171         | 255.255.255.0 | global | fa:16:3e:8b:98:18 |
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe8b:9818/64 |       .       |  link  | fa:16:3e:8b:98:18 |
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info++++++++++++++++++++++++++++++++
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: |   0   |     0.0.0.0     | 38.129.56.1 |     0.0.0.0     |    eth0   |   UG  |
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: |   1   |   38.129.56.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.129.56.5 | 255.255.255.255 |    eth0   |  UGH  |
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: +-------+-----------------+-------------+-----------------+-----------+-------+
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 26 11:50:42 np0005596060 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 26 11:50:44 np0005596060 cloud-init[922]: Generating public/private rsa key pair.
Jan 26 11:50:44 np0005596060 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 26 11:50:44 np0005596060 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 26 11:50:44 np0005596060 cloud-init[922]: The key fingerprint is:
Jan 26 11:50:44 np0005596060 cloud-init[922]: SHA256:K8Dd39nVLS1vx7PiaVoTmK4srhQByCtRpQTnJoHhZT0 root@np0005596060.novalocal
Jan 26 11:50:44 np0005596060 cloud-init[922]: The key's randomart image is:
Jan 26 11:50:44 np0005596060 cloud-init[922]: +---[RSA 3072]----+
Jan 26 11:50:44 np0005596060 cloud-init[922]: |*+==o            |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |+Bo..E           |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |.o=  ..          |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |.+ . ...    o  .o|
Jan 26 11:50:44 np0005596060 cloud-init[922]: |.   o.. S  o .o =|
Jan 26 11:50:44 np0005596060 cloud-init[922]: |     ..  o.. o.* |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |     .. . ..oo..=|
Jan 26 11:50:44 np0005596060 cloud-init[922]: |    .  o. . .oo.+|
Jan 26 11:50:44 np0005596060 cloud-init[922]: |     .o..o .+o.. |
Jan 26 11:50:44 np0005596060 cloud-init[922]: +----[SHA256]-----+
Jan 26 11:50:44 np0005596060 cloud-init[922]: Generating public/private ecdsa key pair.
Jan 26 11:50:44 np0005596060 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 26 11:50:44 np0005596060 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 26 11:50:44 np0005596060 cloud-init[922]: The key fingerprint is:
Jan 26 11:50:44 np0005596060 cloud-init[922]: SHA256:iPlQueQdIP9v6PSjQughAQg/n+3j17in0nfFtxY4I9w root@np0005596060.novalocal
Jan 26 11:50:44 np0005596060 cloud-init[922]: The key's randomart image is:
Jan 26 11:50:44 np0005596060 cloud-init[922]: +---[ECDSA 256]---+
Jan 26 11:50:44 np0005596060 cloud-init[922]: |+   . .          |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |o.   o o         |
Jan 26 11:50:44 np0005596060 cloud-init[922]: | .o   = .        |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |  .o O = .       |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |   .*.= S . ...  |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |  . o+.  o o Eo..|
Jan 26 11:50:44 np0005596060 cloud-init[922]: |   o o+.ooo ..o.o|
Jan 26 11:50:44 np0005596060 cloud-init[922]: |    ..o++o= .  o |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |      .+=*.o  .  |
Jan 26 11:50:44 np0005596060 cloud-init[922]: +----[SHA256]-----+
Jan 26 11:50:44 np0005596060 cloud-init[922]: Generating public/private ed25519 key pair.
Jan 26 11:50:44 np0005596060 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 26 11:50:44 np0005596060 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 26 11:50:44 np0005596060 cloud-init[922]: The key fingerprint is:
Jan 26 11:50:44 np0005596060 cloud-init[922]: SHA256:TM7VHXnoxCKI+EBWSiMXeXlisp2H9yxfwA0NI3VlQJo root@np0005596060.novalocal
Jan 26 11:50:44 np0005596060 cloud-init[922]: The key's randomart image is:
Jan 26 11:50:44 np0005596060 cloud-init[922]: +--[ED25519 256]--+
Jan 26 11:50:44 np0005596060 cloud-init[922]: |  ..B=.o.o+++++o |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |   ==o* o.o*oo=..|
Jan 26 11:50:44 np0005596060 cloud-init[922]: |    .O =..E+.+.. |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |    . ==o.o . .  |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |       oSo .     |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |        . o .    |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |         o .     |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |          .      |
Jan 26 11:50:44 np0005596060 cloud-init[922]: |                 |
Jan 26 11:50:44 np0005596060 cloud-init[922]: +----[SHA256]-----+
Jan 26 11:50:44 np0005596060 systemd[1]: Finished Cloud-init: Network Stage.
Jan 26 11:50:44 np0005596060 systemd[1]: Reached target Cloud-config availability.
Jan 26 11:50:44 np0005596060 systemd[1]: Reached target Network is Online.
Jan 26 11:50:44 np0005596060 systemd[1]: Starting Cloud-init: Config Stage...
Jan 26 11:50:44 np0005596060 systemd[1]: Starting Crash recovery kernel arming...
Jan 26 11:50:44 np0005596060 systemd[1]: Starting Notify NFS peers of a restart...
Jan 26 11:50:44 np0005596060 systemd[1]: Starting System Logging Service...
Jan 26 11:50:44 np0005596060 sm-notify[1004]: Version 2.5.4 starting
Jan 26 11:50:44 np0005596060 systemd[1]: Starting OpenSSH server daemon...
Jan 26 11:50:44 np0005596060 systemd[1]: Starting Permit User Sessions...
Jan 26 11:50:44 np0005596060 systemd[1]: Started Notify NFS peers of a restart.
Jan 26 11:50:44 np0005596060 systemd[1]: Finished Permit User Sessions.
Jan 26 11:50:44 np0005596060 systemd[1]: Started OpenSSH server daemon.
Jan 26 11:50:44 np0005596060 systemd[1]: Started Command Scheduler.
Jan 26 11:50:44 np0005596060 systemd[1]: Started Getty on tty1.
Jan 26 11:50:44 np0005596060 systemd[1]: Started Serial Getty on ttyS0.
Jan 26 11:50:44 np0005596060 systemd[1]: Reached target Login Prompts.
Jan 26 11:50:44 np0005596060 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Jan 26 11:50:44 np0005596060 rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 26 11:50:44 np0005596060 systemd[1]: Started System Logging Service.
Jan 26 11:50:44 np0005596060 systemd[1]: Reached target Multi-User System.
Jan 26 11:50:44 np0005596060 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 26 11:50:44 np0005596060 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 26 11:50:44 np0005596060 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 26 11:50:44 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 11:50:45 np0005596060 kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Jan 26 11:50:45 np0005596060 kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 26 11:50:45 np0005596060 cloud-init[1144]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Mon, 26 Jan 2026 16:50:45 +0000. Up 9.73 seconds.
Jan 26 11:50:45 np0005596060 systemd[1]: Finished Cloud-init: Config Stage.
Jan 26 11:50:45 np0005596060 systemd[1]: Starting Cloud-init: Final Stage...
Jan 26 11:50:45 np0005596060 dracut[1265]: dracut-057-102.git20250818.el9
Jan 26 11:50:45 np0005596060 dracut[1267]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 26 11:50:45 np0005596060 cloud-init[1304]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Mon, 26 Jan 2026 16:50:45 +0000. Up 10.11 seconds.
Jan 26 11:50:45 np0005596060 cloud-init[1337]: #############################################################
Jan 26 11:50:45 np0005596060 cloud-init[1338]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 26 11:50:45 np0005596060 cloud-init[1340]: 256 SHA256:iPlQueQdIP9v6PSjQughAQg/n+3j17in0nfFtxY4I9w root@np0005596060.novalocal (ECDSA)
Jan 26 11:50:45 np0005596060 cloud-init[1342]: 256 SHA256:TM7VHXnoxCKI+EBWSiMXeXlisp2H9yxfwA0NI3VlQJo root@np0005596060.novalocal (ED25519)
Jan 26 11:50:45 np0005596060 cloud-init[1346]: 3072 SHA256:K8Dd39nVLS1vx7PiaVoTmK4srhQByCtRpQTnJoHhZT0 root@np0005596060.novalocal (RSA)
Jan 26 11:50:45 np0005596060 cloud-init[1348]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 26 11:50:45 np0005596060 cloud-init[1349]: #############################################################
Jan 26 11:50:45 np0005596060 cloud-init[1304]: Cloud-init v. 24.4-8.el9 finished at Mon, 26 Jan 2026 16:50:45 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.28 seconds
Jan 26 11:50:45 np0005596060 systemd[1]: Finished Cloud-init: Final Stage.
Jan 26 11:50:45 np0005596060 systemd[1]: Reached target Cloud-init target.
Jan 26 11:50:45 np0005596060 dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 26 11:50:45 np0005596060 dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 26 11:50:45 np0005596060 dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 26 11:50:45 np0005596060 dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 26 11:50:45 np0005596060 dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 26 11:50:45 np0005596060 dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: memstrack is not available
Jan 26 11:50:46 np0005596060 dracut[1267]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 26 11:50:46 np0005596060 chronyd[789]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Jan 26 11:50:46 np0005596060 chronyd[789]: System clock TAI offset set to 37 seconds
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 26 11:50:46 np0005596060 dracut[1267]: memstrack is not available
Jan 26 11:50:46 np0005596060 dracut[1267]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 26 11:50:46 np0005596060 dracut[1267]: *** Including module: systemd ***
Jan 26 11:50:47 np0005596060 dracut[1267]: *** Including module: fips ***
Jan 26 11:50:47 np0005596060 dracut[1267]: *** Including module: systemd-initrd ***
Jan 26 11:50:47 np0005596060 dracut[1267]: *** Including module: i18n ***
Jan 26 11:50:47 np0005596060 dracut[1267]: *** Including module: drm ***
Jan 26 11:50:47 np0005596060 dracut[1267]: *** Including module: prefixdevname ***
Jan 26 11:50:47 np0005596060 dracut[1267]: *** Including module: kernel-modules ***
Jan 26 11:50:48 np0005596060 kernel: block vda: the capability attribute has been deprecated.
Jan 26 11:50:48 np0005596060 dracut[1267]: *** Including module: kernel-modules-extra ***
Jan 26 11:50:48 np0005596060 dracut[1267]: *** Including module: qemu ***
Jan 26 11:50:48 np0005596060 dracut[1267]: *** Including module: fstab-sys ***
Jan 26 11:50:48 np0005596060 dracut[1267]: *** Including module: rootfs-block ***
Jan 26 11:50:48 np0005596060 dracut[1267]: *** Including module: terminfo ***
Jan 26 11:50:48 np0005596060 dracut[1267]: *** Including module: udev-rules ***
Jan 26 11:50:49 np0005596060 dracut[1267]: Skipping udev rule: 91-permissions.rules
Jan 26 11:50:49 np0005596060 dracut[1267]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 26 11:50:49 np0005596060 dracut[1267]: *** Including module: virtiofs ***
Jan 26 11:50:49 np0005596060 dracut[1267]: *** Including module: dracut-systemd ***
Jan 26 11:50:49 np0005596060 dracut[1267]: *** Including module: usrmount ***
Jan 26 11:50:49 np0005596060 dracut[1267]: *** Including module: base ***
Jan 26 11:50:50 np0005596060 dracut[1267]: *** Including module: fs-lib ***
Jan 26 11:50:50 np0005596060 dracut[1267]: *** Including module: kdumpbase ***
Jan 26 11:50:50 np0005596060 dracut[1267]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 26 11:50:50 np0005596060 dracut[1267]:  microcode_ctl module: mangling fw_dir
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: configuration "intel" is ignored
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 26 11:50:50 np0005596060 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 26 11:50:51 np0005596060 dracut[1267]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 26 11:50:51 np0005596060 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 26 11:50:51 np0005596060 dracut[1267]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 26 11:50:51 np0005596060 dracut[1267]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 26 11:50:51 np0005596060 dracut[1267]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 26 11:50:51 np0005596060 dracut[1267]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 26 11:50:51 np0005596060 dracut[1267]: *** Including module: openssl ***
Jan 26 11:50:51 np0005596060 dracut[1267]: *** Including module: shutdown ***
Jan 26 11:50:51 np0005596060 dracut[1267]: *** Including module: squash ***
Jan 26 11:50:51 np0005596060 dracut[1267]: *** Including modules done ***
Jan 26 11:50:51 np0005596060 dracut[1267]: *** Installing kernel module dependencies ***
Jan 26 11:50:51 np0005596060 irqbalance[781]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 26 11:50:51 np0005596060 irqbalance[781]: IRQ 25 affinity is now unmanaged
Jan 26 11:50:51 np0005596060 irqbalance[781]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 26 11:50:51 np0005596060 irqbalance[781]: IRQ 31 affinity is now unmanaged
Jan 26 11:50:51 np0005596060 irqbalance[781]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 26 11:50:51 np0005596060 irqbalance[781]: IRQ 28 affinity is now unmanaged
Jan 26 11:50:51 np0005596060 irqbalance[781]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 26 11:50:51 np0005596060 irqbalance[781]: IRQ 32 affinity is now unmanaged
Jan 26 11:50:51 np0005596060 irqbalance[781]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 26 11:50:51 np0005596060 irqbalance[781]: IRQ 30 affinity is now unmanaged
Jan 26 11:50:51 np0005596060 irqbalance[781]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 26 11:50:51 np0005596060 irqbalance[781]: IRQ 29 affinity is now unmanaged
Jan 26 11:50:51 np0005596060 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 11:50:52 np0005596060 dracut[1267]: *** Installing kernel module dependencies done ***
Jan 26 11:50:52 np0005596060 dracut[1267]: *** Resolving executable dependencies ***
Jan 26 11:50:53 np0005596060 dracut[1267]: *** Resolving executable dependencies done ***
Jan 26 11:50:53 np0005596060 dracut[1267]: *** Generating early-microcode cpio image ***
Jan 26 11:50:53 np0005596060 dracut[1267]: *** Store current command line parameters ***
Jan 26 11:50:53 np0005596060 dracut[1267]: Stored kernel commandline:
Jan 26 11:50:53 np0005596060 dracut[1267]: No dracut internal kernel commandline stored in the initramfs
Jan 26 11:50:53 np0005596060 dracut[1267]: *** Install squash loader ***
Jan 26 11:50:54 np0005596060 dracut[1267]: *** Squashing the files inside the initramfs ***
Jan 26 11:50:55 np0005596060 dracut[1267]: *** Squashing the files inside the initramfs done ***
Jan 26 11:50:55 np0005596060 dracut[1267]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 26 11:50:55 np0005596060 dracut[1267]: *** Hardlinking files ***
Jan 26 11:50:55 np0005596060 dracut[1267]: *** Hardlinking files done ***
Jan 26 11:50:55 np0005596060 dracut[1267]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 26 11:50:56 np0005596060 kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Jan 26 11:50:56 np0005596060 kdumpctl[1017]: kdump: Starting kdump: [OK]
Jan 26 11:50:56 np0005596060 systemd[1]: Finished Crash recovery kernel arming.
Jan 26 11:50:56 np0005596060 systemd[1]: Startup finished in 1.519s (kernel) + 2.406s (initrd) + 17.061s (userspace) = 20.987s.
Jan 26 11:51:11 np0005596060 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 12:00:14 np0005596060 systemd[1]: Created slice User Slice of UID 1000.
Jan 26 12:00:14 np0005596060 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 26 12:00:14 np0005596060 systemd-logind[786]: New session 1 of user zuul.
Jan 26 12:00:14 np0005596060 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 26 12:00:14 np0005596060 systemd[1]: Starting User Manager for UID 1000...
Jan 26 12:00:14 np0005596060 systemd[4310]: Queued start job for default target Main User Target.
Jan 26 12:00:14 np0005596060 systemd[4310]: Created slice User Application Slice.
Jan 26 12:00:14 np0005596060 systemd[4310]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 26 12:00:14 np0005596060 systemd[4310]: Started Daily Cleanup of User's Temporary Directories.
Jan 26 12:00:14 np0005596060 systemd[4310]: Reached target Paths.
Jan 26 12:00:14 np0005596060 systemd[4310]: Reached target Timers.
Jan 26 12:00:14 np0005596060 systemd[4310]: Starting D-Bus User Message Bus Socket...
Jan 26 12:00:14 np0005596060 systemd[4310]: Starting Create User's Volatile Files and Directories...
Jan 26 12:00:14 np0005596060 systemd[4310]: Listening on D-Bus User Message Bus Socket.
Jan 26 12:00:14 np0005596060 systemd[4310]: Reached target Sockets.
Jan 26 12:00:14 np0005596060 systemd[4310]: Finished Create User's Volatile Files and Directories.
Jan 26 12:00:14 np0005596060 systemd[4310]: Reached target Basic System.
Jan 26 12:00:14 np0005596060 systemd[4310]: Reached target Main User Target.
Jan 26 12:00:14 np0005596060 systemd[4310]: Startup finished in 107ms.
Jan 26 12:00:14 np0005596060 systemd[1]: Started User Manager for UID 1000.
Jan 26 12:00:14 np0005596060 systemd[1]: Started Session 1 of User zuul.
Jan 26 12:00:14 np0005596060 python3[4392]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:00:17 np0005596060 python3[4420]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:00:26 np0005596060 python3[4478]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:00:26 np0005596060 python3[4518]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 26 12:00:29 np0005596060 python3[4544]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzdr5ekFaPx5lHCHhmmyyey8qVCMeTV+mIzJTvZGts8fWfChGIy6y7YCGlmkytpqPA07Fdi16KsS1gXNTGiDaGesXpaNaE+VxEl1z2rMUDI2agXur5kwAnnLX6ecRHowjHbfU1zfjLXFqAMHYc0aCPRCp060fLIuO4nlwJ3GWq0ye5H1ZVELwGDayCuDWzbK5aHDztQdNJDgdy9OPuZ8b+K9F7fbWU1Z+dBU7m5IN5KjKFd/cPNSHsK6ON+/Sfi4qtk8jBXQYpM1BizgXu33re8tOhjys5ZQoV9DYya4bJkXiff+Ruz4U28Pu9uh4FkhbYSpG9Y1LTnlG2kGmI4atVVR7gSRZv/2LznHdwcFRHyX7kKVFwYvWMjYumEpe5bfQIF9XXoeFhFEMeEpl3jwZGQKFDFakCMaU4DYm0kDhjP3TXPwc1qih9KawhQ/+M5yhHRmTfFnaue4dl4qdaYLxvciw6hzU/3xhhgXvi22OXk3iReBOKJZxM0/S5k0VAG2c= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:30 np0005596060 python3[4568]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:30 np0005596060 python3[4667]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:00:30 np0005596060 python3[4738]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769446830.3534718-251-189057608657480/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=80829f0cc97a4a10a8c6e238c5ab9a25_id_rsa follow=False checksum=6709c318edb1fd99be951f08f8e495e1e3755a4f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:31 np0005596060 python3[4861]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:00:31 np0005596060 python3[4932]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769446831.3234951-306-131056667749325/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=80829f0cc97a4a10a8c6e238c5ab9a25_id_rsa.pub follow=False checksum=a347889044a62ef15060bc27e1ab0fab9aa4e666 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:33 np0005596060 python3[4980]: ansible-ping Invoked with data=pong
Jan 26 12:00:34 np0005596060 python3[5004]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:00:36 np0005596060 python3[5062]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 26 12:00:37 np0005596060 python3[5094]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:38 np0005596060 python3[5118]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:38 np0005596060 python3[5142]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:38 np0005596060 python3[5166]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:38 np0005596060 python3[5190]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:39 np0005596060 python3[5214]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:40 np0005596060 python3[5240]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:41 np0005596060 python3[5318]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:00:41 np0005596060 python3[5392]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769446840.9706695-31-27282802478227/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:42 np0005596060 python3[5440]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:42 np0005596060 python3[5464]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:43 np0005596060 python3[5488]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:43 np0005596060 python3[5512]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:43 np0005596060 python3[5536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:43 np0005596060 python3[5560]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:44 np0005596060 python3[5584]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:44 np0005596060 python3[5608]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:44 np0005596060 python3[5632]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:44 np0005596060 python3[5656]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:45 np0005596060 python3[5680]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:45 np0005596060 python3[5704]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:45 np0005596060 python3[5728]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:45 np0005596060 python3[5752]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:46 np0005596060 python3[5776]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:46 np0005596060 python3[5800]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:46 np0005596060 python3[5824]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:47 np0005596060 python3[5848]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:47 np0005596060 python3[5872]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:47 np0005596060 python3[5896]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:47 np0005596060 python3[5920]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:48 np0005596060 python3[5944]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:48 np0005596060 python3[5968]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:48 np0005596060 python3[5992]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:48 np0005596060 python3[6016]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:49 np0005596060 python3[6040]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:00:52 np0005596060 python3[6067]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 26 12:00:52 np0005596060 systemd[1]: Starting Time & Date Service...
Jan 26 12:00:52 np0005596060 systemd[1]: Started Time & Date Service.
Jan 26 12:00:52 np0005596060 systemd-timedated[6069]: Changed time zone to 'UTC' (UTC).
Jan 26 12:00:52 np0005596060 python3[6098]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:53 np0005596060 python3[6174]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:00:53 np0005596060 python3[6245]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769446852.777854-251-36712539946556/source _original_basename=tmpoeg241o1 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:54 np0005596060 python3[6345]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:00:54 np0005596060 python3[6416]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769446854.2831738-301-17225515364572/source _original_basename=tmpxiv2s786 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:55 np0005596060 python3[6518]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:00:55 np0005596060 python3[6591]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769446855.3056679-381-276974788100510/source _original_basename=tmpxyt4df_d follow=False checksum=e1880ae41521d780ec763a04a57766b6739f674d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:56 np0005596060 python3[6639]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:00:56 np0005596060 python3[6665]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:00:57 np0005596060 python3[6745]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:00:57 np0005596060 python3[6818]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769446856.9427876-451-124667893268821/source _original_basename=tmpgbiaedw4 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:00:58 np0005596060 python3[6869]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-d7b6-8e1f-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:00:58 np0005596060 python3[6897]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-d7b6-8e1f-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 26 12:01:00 np0005596060 python3[6925]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:01:22 np0005596060 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 26 12:01:22 np0005596060 python3[6968]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:02:04 np0005596060 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 26 12:02:04 np0005596060 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 26 12:02:04 np0005596060 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 26 12:02:04 np0005596060 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 26 12:02:04 np0005596060 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 26 12:02:04 np0005596060 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 26 12:02:04 np0005596060 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 26 12:02:04 np0005596060 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 26 12:02:04 np0005596060 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 26 12:02:04 np0005596060 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 26 12:02:04 np0005596060 NetworkManager[858]: <info>  [1769446924.4738] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 26 12:02:04 np0005596060 systemd-udevd[6970]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 12:02:04 np0005596060 NetworkManager[858]: <info>  [1769446924.4919] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:02:04 np0005596060 NetworkManager[858]: <info>  [1769446924.4943] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 26 12:02:04 np0005596060 NetworkManager[858]: <info>  [1769446924.4945] device (eth1): carrier: link connected
Jan 26 12:02:04 np0005596060 NetworkManager[858]: <info>  [1769446924.4947] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 26 12:02:04 np0005596060 NetworkManager[858]: <info>  [1769446924.4951] policy: auto-activating connection 'Wired connection 1' (811d519c-2a87-337a-ad17-b13888f5f045)
Jan 26 12:02:04 np0005596060 NetworkManager[858]: <info>  [1769446924.4954] device (eth1): Activation: starting connection 'Wired connection 1' (811d519c-2a87-337a-ad17-b13888f5f045)
Jan 26 12:02:04 np0005596060 NetworkManager[858]: <info>  [1769446924.4955] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:02:04 np0005596060 NetworkManager[858]: <info>  [1769446924.4957] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:02:04 np0005596060 NetworkManager[858]: <info>  [1769446924.4960] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:02:04 np0005596060 NetworkManager[858]: <info>  [1769446924.4964] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 26 12:02:05 np0005596060 python3[6997]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-4b93-3762-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:02:15 np0005596060 python3[7077]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:02:15 np0005596060 python3[7150]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769446935.3433974-104-171353472663820/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=2f70efda15b55c3794b4debc796e6db8ed169716 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:02:16 np0005596060 python3[7200]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:02:16 np0005596060 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 26 12:02:16 np0005596060 systemd[1]: Stopped Network Manager Wait Online.
Jan 26 12:02:16 np0005596060 systemd[1]: Stopping Network Manager Wait Online...
Jan 26 12:02:16 np0005596060 systemd[1]: Stopping Network Manager...
Jan 26 12:02:16 np0005596060 NetworkManager[858]: <info>  [1769446936.7578] caught SIGTERM, shutting down normally.
Jan 26 12:02:16 np0005596060 NetworkManager[858]: <info>  [1769446936.7586] dhcp4 (eth0): canceled DHCP transaction
Jan 26 12:02:16 np0005596060 NetworkManager[858]: <info>  [1769446936.7586] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 12:02:16 np0005596060 NetworkManager[858]: <info>  [1769446936.7586] dhcp4 (eth0): state changed no lease
Jan 26 12:02:16 np0005596060 NetworkManager[858]: <info>  [1769446936.7588] manager: NetworkManager state is now CONNECTING
Jan 26 12:02:16 np0005596060 NetworkManager[858]: <info>  [1769446936.7652] dhcp4 (eth1): canceled DHCP transaction
Jan 26 12:02:16 np0005596060 NetworkManager[858]: <info>  [1769446936.7652] dhcp4 (eth1): state changed no lease
Jan 26 12:02:16 np0005596060 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 12:02:16 np0005596060 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 12:02:16 np0005596060 NetworkManager[858]: <info>  [1769446936.8984] exiting (success)
Jan 26 12:02:16 np0005596060 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 26 12:02:16 np0005596060 systemd[1]: Stopped Network Manager.
Jan 26 12:02:16 np0005596060 systemd[1]: NetworkManager.service: Consumed 4.447s CPU time, 10.0M memory peak.
Jan 26 12:02:16 np0005596060 systemd[1]: Starting Network Manager...
Jan 26 12:02:16 np0005596060 NetworkManager[7217]: <info>  [1769446936.9441] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:529c9622-ee15-4af2-bc4b-6a5a72e6844b)
Jan 26 12:02:16 np0005596060 NetworkManager[7217]: <info>  [1769446936.9444] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 26 12:02:16 np0005596060 NetworkManager[7217]: <info>  [1769446936.9492] manager[0x55f828763000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 26 12:02:16 np0005596060 systemd[1]: Starting Hostname Service...
Jan 26 12:02:17 np0005596060 systemd[1]: Started Hostname Service.
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0365] hostname: hostname: using hostnamed
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0369] hostname: static hostname changed from (none) to "np0005596060.novalocal"
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0376] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0383] manager[0x55f828763000]: rfkill: Wi-Fi hardware radio set enabled
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0383] manager[0x55f828763000]: rfkill: WWAN hardware radio set enabled
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0418] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0418] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0419] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0419] manager: Networking is enabled by state file
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0422] settings: Loaded settings plugin: keyfile (internal)
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0427] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0457] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0468] dhcp: init: Using DHCP client 'internal'
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0471] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0477] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0483] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0491] device (lo): Activation: starting connection 'lo' (8d71b0e0-2bcd-4be2-b29f-3f05a483f058)
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0498] device (eth0): carrier: link connected
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0503] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0508] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0508] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0515] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0522] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0529] device (eth1): carrier: link connected
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0533] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0540] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (811d519c-2a87-337a-ad17-b13888f5f045) (indicated)
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0540] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0547] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0555] device (eth1): Activation: starting connection 'Wired connection 1' (811d519c-2a87-337a-ad17-b13888f5f045)
Jan 26 12:02:17 np0005596060 systemd[1]: Started Network Manager.
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0562] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0571] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0574] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0576] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0579] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0581] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0584] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0596] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0601] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0610] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0614] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0624] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0627] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0648] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0650] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0657] device (lo): Activation: successful, device activated.
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0666] dhcp4 (eth0): state changed new lease, address=38.129.56.171
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.0673] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 26 12:02:17 np0005596060 systemd[1]: Starting Network Manager Wait Online...
Jan 26 12:02:17 np0005596060 python3[7266]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-4b93-3762-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.3835] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.5452] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.5455] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.5463] manager: NetworkManager state is now CONNECTED_SITE
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.5469] device (eth0): Activation: successful, device activated.
Jan 26 12:02:17 np0005596060 NetworkManager[7217]: <info>  [1769446937.5479] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 26 12:02:24 np0005596060 systemd[4310]: Starting Mark boot as successful...
Jan 26 12:02:24 np0005596060 systemd[4310]: Finished Mark boot as successful.
Jan 26 12:02:27 np0005596060 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 12:02:47 np0005596060 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.3635] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 26 12:03:02 np0005596060 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 12:03:02 np0005596060 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.3876] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.3878] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.3887] device (eth1): Activation: successful, device activated.
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.3894] manager: startup complete
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.3897] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <warn>  [1769446982.3906] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.3913] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 26 12:03:02 np0005596060 systemd[1]: Finished Network Manager Wait Online.
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.4043] dhcp4 (eth1): canceled DHCP transaction
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.4045] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.4046] dhcp4 (eth1): state changed no lease
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.4061] policy: auto-activating connection 'ci-private-network' (132143e7-a9aa-5379-9229-ae1e0c5a0fb3)
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.4065] device (eth1): Activation: starting connection 'ci-private-network' (132143e7-a9aa-5379-9229-ae1e0c5a0fb3)
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.4066] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.4068] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.4075] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.4084] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.4138] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.4140] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:03:02 np0005596060 NetworkManager[7217]: <info>  [1769446982.4145] device (eth1): Activation: successful, device activated.
Jan 26 12:03:12 np0005596060 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 12:03:17 np0005596060 systemd-logind[786]: Session 1 logged out. Waiting for processes to exit.
Jan 26 12:04:19 np0005596060 systemd-logind[786]: New session 3 of user zuul.
Jan 26 12:04:19 np0005596060 systemd[1]: Started Session 3 of User zuul.
Jan 26 12:04:19 np0005596060 python3[7402]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:04:20 np0005596060 python3[7475]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769447059.5612037-373-12167771625630/source _original_basename=tmpp2tfrdts follow=False checksum=ff12fd3b26ba1169babfae82900b18cee99f46f8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:04:24 np0005596060 systemd[1]: session-3.scope: Deactivated successfully.
Jan 26 12:04:24 np0005596060 systemd-logind[786]: Session 3 logged out. Waiting for processes to exit.
Jan 26 12:04:24 np0005596060 systemd-logind[786]: Removed session 3.
Jan 26 12:05:24 np0005596060 systemd[4310]: Created slice User Background Tasks Slice.
Jan 26 12:05:24 np0005596060 systemd[4310]: Starting Cleanup of User's Temporary Files and Directories...
Jan 26 12:05:24 np0005596060 systemd[4310]: Finished Cleanup of User's Temporary Files and Directories.
Jan 26 12:06:24 np0005596060 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 26 12:06:24 np0005596060 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 26 12:06:24 np0005596060 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 26 12:06:24 np0005596060 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 26 12:10:24 np0005596060 systemd-logind[786]: New session 4 of user zuul.
Jan 26 12:10:24 np0005596060 systemd[1]: Started Session 4 of User zuul.
Jan 26 12:10:24 np0005596060 python3[7537]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-1288-0b10-000000000caa-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:10:25 np0005596060 python3[7566]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:10:25 np0005596060 python3[7592]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:10:25 np0005596060 python3[7618]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:10:25 np0005596060 python3[7644]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:10:26 np0005596060 python3[7670]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:10:26 np0005596060 python3[7748]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:10:27 np0005596060 python3[7821]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769447426.633823-367-245014529887161/source _original_basename=tmpfwlcefes follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:10:28 np0005596060 python3[7871]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 12:10:28 np0005596060 systemd[1]: Reloading.
Jan 26 12:10:28 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:10:30 np0005596060 python3[7927]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 26 12:10:31 np0005596060 python3[7953]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:10:31 np0005596060 python3[7981]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:10:31 np0005596060 python3[8009]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:10:31 np0005596060 python3[8037]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:10:32 np0005596060 python3[8064]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-1288-0b10-000000000cb1-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:10:33 np0005596060 python3[8094]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 12:10:36 np0005596060 systemd[1]: session-4.scope: Deactivated successfully.
Jan 26 12:10:36 np0005596060 systemd[1]: session-4.scope: Consumed 4.022s CPU time.
Jan 26 12:10:36 np0005596060 systemd-logind[786]: Session 4 logged out. Waiting for processes to exit.
Jan 26 12:10:36 np0005596060 systemd-logind[786]: Removed session 4.
Jan 26 12:10:38 np0005596060 systemd-logind[786]: New session 5 of user zuul.
Jan 26 12:10:38 np0005596060 systemd[1]: Started Session 5 of User zuul.
Jan 26 12:10:38 np0005596060 python3[8127]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 26 12:10:47 np0005596060 setsebool[8170]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 26 12:10:47 np0005596060 setsebool[8170]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 26 12:11:04 np0005596060 kernel: SELinux:  Converting 386 SID table entries...
Jan 26 12:11:04 np0005596060 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 12:11:04 np0005596060 kernel: SELinux:  policy capability open_perms=1
Jan 26 12:11:04 np0005596060 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 12:11:04 np0005596060 kernel: SELinux:  policy capability always_check_network=0
Jan 26 12:11:04 np0005596060 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 12:11:04 np0005596060 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 12:11:04 np0005596060 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 12:11:18 np0005596060 kernel: SELinux:  Converting 389 SID table entries...
Jan 26 12:11:18 np0005596060 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 12:11:18 np0005596060 kernel: SELinux:  policy capability open_perms=1
Jan 26 12:11:18 np0005596060 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 12:11:18 np0005596060 kernel: SELinux:  policy capability always_check_network=0
Jan 26 12:11:18 np0005596060 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 12:11:18 np0005596060 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 12:11:18 np0005596060 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 12:11:44 np0005596060 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 26 12:11:44 np0005596060 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 12:11:44 np0005596060 systemd[1]: Starting man-db-cache-update.service...
Jan 26 12:11:44 np0005596060 systemd[1]: Reloading.
Jan 26 12:11:44 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:11:44 np0005596060 systemd[1]: Starting dnf makecache...
Jan 26 12:11:44 np0005596060 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 12:11:45 np0005596060 dnf[8951]: Failed determining last makecache time.
Jan 26 12:11:46 np0005596060 dnf[8951]: CentOS Stream 9 - BaseOS                         36 kB/s | 6.7 kB     00:00
Jan 26 12:11:46 np0005596060 dnf[8951]: CentOS Stream 9 - AppStream                      58 kB/s | 6.8 kB     00:00
Jan 26 12:11:47 np0005596060 dnf[8951]: CentOS Stream 9 - CRB                            29 kB/s | 6.6 kB     00:00
Jan 26 12:11:47 np0005596060 dnf[8951]: CentOS Stream 9 - Extras packages                70 kB/s | 7.3 kB     00:00
Jan 26 12:11:47 np0005596060 dnf[8951]: Metadata cache created.
Jan 26 12:11:47 np0005596060 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 26 12:11:47 np0005596060 systemd[1]: Finished dnf makecache.
Jan 26 12:11:48 np0005596060 python3[11107]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-084f-674e-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:11:50 np0005596060 kernel: evm: overlay not supported
Jan 26 12:11:50 np0005596060 systemd[4310]: Starting D-Bus User Message Bus...
Jan 26 12:11:50 np0005596060 dbus-broker-launch[12222]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 26 12:11:50 np0005596060 dbus-broker-launch[12222]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 26 12:11:50 np0005596060 systemd[4310]: Started D-Bus User Message Bus.
Jan 26 12:11:50 np0005596060 dbus-broker-lau[12222]: Ready
Jan 26 12:11:50 np0005596060 systemd[4310]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 26 12:11:50 np0005596060 systemd[4310]: Created slice Slice /user.
Jan 26 12:11:50 np0005596060 systemd[4310]: podman-12054.scope: unit configures an IP firewall, but not running as root.
Jan 26 12:11:50 np0005596060 systemd[4310]: (This warning is only shown for the first unit using IP firewalling.)
Jan 26 12:11:50 np0005596060 systemd[4310]: Started podman-12054.scope.
Jan 26 12:11:50 np0005596060 systemd[4310]: Started podman-pause-a8a59230.scope.
Jan 26 12:11:51 np0005596060 python3[12841]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.22:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.22:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:11:51 np0005596060 python3[12841]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 26 12:11:51 np0005596060 systemd[1]: session-5.scope: Deactivated successfully.
Jan 26 12:11:51 np0005596060 systemd[1]: session-5.scope: Consumed 52.770s CPU time.
Jan 26 12:11:51 np0005596060 systemd-logind[786]: Session 5 logged out. Waiting for processes to exit.
Jan 26 12:11:51 np0005596060 systemd-logind[786]: Removed session 5.
Jan 26 12:12:21 np0005596060 systemd-logind[786]: New session 6 of user zuul.
Jan 26 12:12:21 np0005596060 systemd[1]: Started Session 6 of User zuul.
Jan 26 12:12:22 np0005596060 python3[25716]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC2KNOrliIHVpRkoYSANCIWkSJEaoIB3ID7izEiG92Sz9ZDLnB8Yf+FcZuIYW5FpyTRAiW5K324Zpl7LaJD12Jw= zuul@np0005596059.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:12:22 np0005596060 python3[25926]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC2KNOrliIHVpRkoYSANCIWkSJEaoIB3ID7izEiG92Sz9ZDLnB8Yf+FcZuIYW5FpyTRAiW5K324Zpl7LaJD12Jw= zuul@np0005596059.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:12:23 np0005596060 python3[26390]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005596060.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 26 12:12:24 np0005596060 python3[26700]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBC2KNOrliIHVpRkoYSANCIWkSJEaoIB3ID7izEiG92Sz9ZDLnB8Yf+FcZuIYW5FpyTRAiW5K324Zpl7LaJD12Jw= zuul@np0005596059.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 26 12:12:24 np0005596060 python3[26989]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:12:25 np0005596060 python3[27288]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769447544.522105-167-135430879168894/source _original_basename=tmpum0j2iaw follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:12:26 np0005596060 python3[27654]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 26 12:12:26 np0005596060 systemd[1]: Starting Hostname Service...
Jan 26 12:12:26 np0005596060 systemd[1]: Started Hostname Service.
Jan 26 12:12:26 np0005596060 systemd-hostnamed[27773]: Changed pretty hostname to 'compute-0'
Jan 26 12:12:26 np0005596060 systemd-hostnamed[27773]: Hostname set to <compute-0> (static)
Jan 26 12:12:26 np0005596060 NetworkManager[7217]: <info>  [1769447546.4052] hostname: static hostname changed from "np0005596060.novalocal" to "compute-0"
Jan 26 12:12:26 np0005596060 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 12:12:26 np0005596060 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 12:12:26 np0005596060 systemd[1]: session-6.scope: Deactivated successfully.
Jan 26 12:12:26 np0005596060 systemd[1]: session-6.scope: Consumed 2.369s CPU time.
Jan 26 12:12:26 np0005596060 systemd-logind[786]: Session 6 logged out. Waiting for processes to exit.
Jan 26 12:12:26 np0005596060 systemd-logind[786]: Removed session 6.
Jan 26 12:12:32 np0005596060 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 12:12:32 np0005596060 systemd[1]: Finished man-db-cache-update.service.
Jan 26 12:12:32 np0005596060 systemd[1]: man-db-cache-update.service: Consumed 53.873s CPU time.
Jan 26 12:12:32 np0005596060 systemd[1]: run-r9853dffd663b4ed8ad00a4632550dfba.service: Deactivated successfully.
Jan 26 12:12:36 np0005596060 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 12:12:56 np0005596060 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 12:16:53 np0005596060 systemd-logind[786]: New session 7 of user zuul.
Jan 26 12:16:53 np0005596060 systemd[1]: Started Session 7 of User zuul.
Jan 26 12:16:53 np0005596060 python3[30044]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:16:55 np0005596060 python3[30160]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:16:56 np0005596060 python3[30233]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769447815.3861628-34066-244044280647424/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:16:56 np0005596060 python3[30259]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:16:56 np0005596060 python3[30332]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769447815.3861628-34066-244044280647424/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:16:57 np0005596060 python3[30358]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:16:57 np0005596060 python3[30431]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769447815.3861628-34066-244044280647424/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:16:57 np0005596060 python3[30457]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:16:58 np0005596060 python3[30530]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769447815.3861628-34066-244044280647424/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:16:58 np0005596060 python3[30556]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:16:58 np0005596060 python3[30629]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769447815.3861628-34066-244044280647424/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:16:58 np0005596060 python3[30655]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:16:59 np0005596060 python3[30728]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769447815.3861628-34066-244044280647424/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:16:59 np0005596060 python3[30754]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:16:59 np0005596060 python3[30827]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769447815.3861628-34066-244044280647424/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:17:12 np0005596060 python3[30885]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:22:11 np0005596060 systemd[1]: session-7.scope: Deactivated successfully.
Jan 26 12:22:11 np0005596060 systemd[1]: session-7.scope: Consumed 4.799s CPU time.
Jan 26 12:22:11 np0005596060 systemd-logind[786]: Session 7 logged out. Waiting for processes to exit.
Jan 26 12:22:11 np0005596060 systemd-logind[786]: Removed session 7.
Jan 26 12:29:55 np0005596060 systemd-logind[786]: New session 8 of user zuul.
Jan 26 12:29:55 np0005596060 systemd[1]: Started Session 8 of User zuul.
Jan 26 12:29:56 np0005596060 python3.9[31044]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:29:57 np0005596060 python3.9[31225]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:30:07 np0005596060 systemd[1]: session-8.scope: Deactivated successfully.
Jan 26 12:30:07 np0005596060 systemd[1]: session-8.scope: Consumed 8.327s CPU time.
Jan 26 12:30:07 np0005596060 systemd-logind[786]: Session 8 logged out. Waiting for processes to exit.
Jan 26 12:30:07 np0005596060 systemd-logind[786]: Removed session 8.
Jan 26 12:30:22 np0005596060 systemd-logind[786]: New session 9 of user zuul.
Jan 26 12:30:22 np0005596060 systemd[1]: Started Session 9 of User zuul.
Jan 26 12:30:23 np0005596060 python3.9[31435]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 26 12:30:24 np0005596060 python3.9[31609]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:30:25 np0005596060 python3.9[31761]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:30:26 np0005596060 python3.9[31914]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:30:27 np0005596060 python3.9[32066]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:30:27 np0005596060 python3.9[32218]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:30:28 np0005596060 python3.9[32342]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769448627.430653-177-117553471808079/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:30:29 np0005596060 python3.9[32494]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:30:30 np0005596060 python3.9[32650]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:30:31 np0005596060 python3.9[32802]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:30:32 np0005596060 python3.9[32952]: ansible-ansible.builtin.service_facts Invoked
Jan 26 12:30:35 np0005596060 python3.9[33205]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:30:36 np0005596060 python3.9[33355]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:30:37 np0005596060 python3.9[33509]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:30:39 np0005596060 python3.9[33667]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:30:39 np0005596060 python3.9[33751]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:31:24 np0005596060 systemd[1]: Reloading.
Jan 26 12:31:24 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:31:24 np0005596060 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 26 12:31:25 np0005596060 systemd[1]: Reloading.
Jan 26 12:31:25 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:31:25 np0005596060 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 26 12:31:25 np0005596060 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 26 12:31:25 np0005596060 systemd[1]: Reloading.
Jan 26 12:31:25 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:31:25 np0005596060 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 26 12:31:25 np0005596060 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 26 12:31:25 np0005596060 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 26 12:31:25 np0005596060 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 26 12:32:32 np0005596060 kernel: SELinux:  Converting 2724 SID table entries...
Jan 26 12:32:32 np0005596060 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 12:32:32 np0005596060 kernel: SELinux:  policy capability open_perms=1
Jan 26 12:32:32 np0005596060 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 12:32:32 np0005596060 kernel: SELinux:  policy capability always_check_network=0
Jan 26 12:32:32 np0005596060 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 12:32:32 np0005596060 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 12:32:32 np0005596060 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 12:32:32 np0005596060 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 26 12:32:32 np0005596060 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 12:32:32 np0005596060 systemd[1]: Starting man-db-cache-update.service...
Jan 26 12:32:32 np0005596060 systemd[1]: Reloading.
Jan 26 12:32:32 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:32:32 np0005596060 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 12:32:35 np0005596060 python3.9[35271]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:32:36 np0005596060 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 12:32:36 np0005596060 systemd[1]: Finished man-db-cache-update.service.
Jan 26 12:32:36 np0005596060 systemd[1]: man-db-cache-update.service: Consumed 1.233s CPU time.
Jan 26 12:32:36 np0005596060 systemd[1]: run-re4ba2cf8738c4e6ca1a0e70032243989.service: Deactivated successfully.
Jan 26 12:32:37 np0005596060 python3.9[35554]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 26 12:32:37 np0005596060 python3.9[35706]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 26 12:32:41 np0005596060 python3.9[35859]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:32:47 np0005596060 python3.9[36011]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 26 12:32:49 np0005596060 python3.9[36164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:32:54 np0005596060 python3.9[36316]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:32:54 np0005596060 python3.9[36439]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769448770.4005156-666-65209301999313/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9c020ad993969d6201452a9427187b11fbbe4910 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:32:55 np0005596060 python3.9[36591]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:32:56 np0005596060 python3.9[36743]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:32:57 np0005596060 python3.9[36896]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:32:58 np0005596060 python3.9[37048]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 26 12:32:58 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 12:32:58 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 12:32:59 np0005596060 python3.9[37202]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 12:33:00 np0005596060 python3.9[37360]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 12:33:01 np0005596060 python3.9[37520]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 26 12:33:02 np0005596060 python3.9[37673]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 12:33:03 np0005596060 python3.9[37831]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 26 12:33:04 np0005596060 python3.9[37983]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:33:07 np0005596060 python3.9[38136]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:33:08 np0005596060 python3.9[38288]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:33:08 np0005596060 python3.9[38411]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769448787.8659275-1023-179136253361949/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:33:10 np0005596060 python3.9[38563]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:33:10 np0005596060 systemd[1]: Starting Load Kernel Modules...
Jan 26 12:33:10 np0005596060 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 26 12:33:10 np0005596060 kernel: Bridge firewalling registered
Jan 26 12:33:10 np0005596060 systemd-modules-load[38567]: Inserted module 'br_netfilter'
Jan 26 12:33:10 np0005596060 systemd[1]: Finished Load Kernel Modules.
Jan 26 12:33:11 np0005596060 python3.9[38723]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:33:11 np0005596060 python3.9[38846]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769448790.5759387-1092-253240703345490/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:33:12 np0005596060 python3.9[38998]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:33:15 np0005596060 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 26 12:33:15 np0005596060 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 26 12:33:16 np0005596060 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 12:33:16 np0005596060 systemd[1]: Starting man-db-cache-update.service...
Jan 26 12:33:16 np0005596060 systemd[1]: Reloading.
Jan 26 12:33:16 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:33:16 np0005596060 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 12:33:18 np0005596060 python3.9[40369]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:33:19 np0005596060 python3.9[41568]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 26 12:33:20 np0005596060 python3.9[42284]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:33:21 np0005596060 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 12:33:21 np0005596060 systemd[1]: Finished man-db-cache-update.service.
Jan 26 12:33:21 np0005596060 systemd[1]: man-db-cache-update.service: Consumed 5.350s CPU time.
Jan 26 12:33:21 np0005596060 systemd[1]: run-ra98ddcfbfd294acdbf2ac042c5fe7ba9.service: Deactivated successfully.
Jan 26 12:33:21 np0005596060 irqbalance[781]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 26 12:33:21 np0005596060 irqbalance[781]: IRQ 27 affinity is now unmanaged
Jan 26 12:33:21 np0005596060 python3.9[43171]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:33:21 np0005596060 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 26 12:33:22 np0005596060 systemd[1]: Starting Authorization Manager...
Jan 26 12:33:22 np0005596060 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 26 12:33:22 np0005596060 polkitd[43388]: Started polkitd version 0.117
Jan 26 12:33:22 np0005596060 systemd[1]: Started Authorization Manager.
Jan 26 12:33:23 np0005596060 python3.9[43558]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:33:23 np0005596060 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 26 12:33:23 np0005596060 systemd[1]: tuned.service: Deactivated successfully.
Jan 26 12:33:23 np0005596060 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 26 12:33:23 np0005596060 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 26 12:33:23 np0005596060 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 26 12:33:24 np0005596060 python3.9[43719]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 26 12:33:28 np0005596060 python3.9[43871]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:33:28 np0005596060 systemd[1]: Reloading.
Jan 26 12:33:28 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:33:29 np0005596060 python3.9[44060]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:33:30 np0005596060 systemd[1]: Reloading.
Jan 26 12:33:30 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:33:31 np0005596060 python3.9[44250]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:33:32 np0005596060 python3.9[44403]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:33:32 np0005596060 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 26 12:33:33 np0005596060 python3.9[44556]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:33:35 np0005596060 python3.9[44718]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:33:36 np0005596060 python3.9[44871]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:33:36 np0005596060 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 26 12:33:36 np0005596060 systemd[1]: Stopped Apply Kernel Variables.
Jan 26 12:33:36 np0005596060 systemd[1]: Stopping Apply Kernel Variables...
Jan 26 12:33:36 np0005596060 systemd[1]: Starting Apply Kernel Variables...
Jan 26 12:33:36 np0005596060 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 26 12:33:36 np0005596060 systemd[1]: Finished Apply Kernel Variables.
Jan 26 12:33:37 np0005596060 systemd[1]: session-9.scope: Deactivated successfully.
Jan 26 12:33:37 np0005596060 systemd[1]: session-9.scope: Consumed 2min 16.138s CPU time.
Jan 26 12:33:37 np0005596060 systemd-logind[786]: Session 9 logged out. Waiting for processes to exit.
Jan 26 12:33:37 np0005596060 systemd-logind[786]: Removed session 9.
Jan 26 12:33:43 np0005596060 systemd-logind[786]: New session 10 of user zuul.
Jan 26 12:33:43 np0005596060 systemd[1]: Started Session 10 of User zuul.
Jan 26 12:33:45 np0005596060 python3.9[45054]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:33:46 np0005596060 python3.9[45210]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 26 12:33:47 np0005596060 python3.9[45363]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 12:33:48 np0005596060 python3.9[45521]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 12:33:49 np0005596060 python3.9[45681]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:33:50 np0005596060 python3.9[45765]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 12:33:53 np0005596060 python3.9[45928]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:34:09 np0005596060 kernel: SELinux:  Converting 2736 SID table entries...
Jan 26 12:34:09 np0005596060 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 12:34:09 np0005596060 kernel: SELinux:  policy capability open_perms=1
Jan 26 12:34:09 np0005596060 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 12:34:09 np0005596060 kernel: SELinux:  policy capability always_check_network=0
Jan 26 12:34:09 np0005596060 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 12:34:09 np0005596060 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 12:34:09 np0005596060 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 12:34:09 np0005596060 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 26 12:34:09 np0005596060 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 26 12:34:10 np0005596060 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 12:34:10 np0005596060 systemd[1]: Starting man-db-cache-update.service...
Jan 26 12:34:11 np0005596060 systemd[1]: Reloading.
Jan 26 12:34:11 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:34:11 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:34:11 np0005596060 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 12:34:11 np0005596060 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 12:34:11 np0005596060 systemd[1]: Finished man-db-cache-update.service.
Jan 26 12:34:11 np0005596060 systemd[1]: run-r2c5ae7aa068f4f0aae829beebafb5b78.service: Deactivated successfully.
Jan 26 12:34:12 np0005596060 python3.9[47026]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 12:34:13 np0005596060 systemd[1]: Reloading.
Jan 26 12:34:13 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:34:13 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:34:13 np0005596060 systemd[1]: Starting Open vSwitch Database Unit...
Jan 26 12:34:13 np0005596060 chown[47067]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 26 12:34:13 np0005596060 ovs-ctl[47072]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 26 12:34:13 np0005596060 ovs-ctl[47072]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 26 12:34:13 np0005596060 ovs-ctl[47072]: Starting ovsdb-server [  OK  ]
Jan 26 12:34:13 np0005596060 ovs-vsctl[47121]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 26 12:34:13 np0005596060 ovs-vsctl[47141]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c76f2593-4bbb-4cef-b447-9e180245ada6\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 26 12:34:13 np0005596060 ovs-ctl[47072]: Configuring Open vSwitch system IDs [  OK  ]
Jan 26 12:34:13 np0005596060 ovs-vsctl[47147]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 26 12:34:13 np0005596060 ovs-ctl[47072]: Enabling remote OVSDB managers [  OK  ]
Jan 26 12:34:13 np0005596060 systemd[1]: Started Open vSwitch Database Unit.
Jan 26 12:34:13 np0005596060 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 26 12:34:13 np0005596060 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 26 12:34:13 np0005596060 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 26 12:34:13 np0005596060 kernel: openvswitch: Open vSwitch switching datapath
Jan 26 12:34:13 np0005596060 ovs-ctl[47191]: Inserting openvswitch module [  OK  ]
Jan 26 12:34:13 np0005596060 ovs-ctl[47160]: Starting ovs-vswitchd [  OK  ]
Jan 26 12:34:13 np0005596060 ovs-ctl[47160]: Enabling remote OVSDB managers [  OK  ]
Jan 26 12:34:13 np0005596060 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 26 12:34:13 np0005596060 ovs-vsctl[47208]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 26 12:34:13 np0005596060 systemd[1]: Starting Open vSwitch...
Jan 26 12:34:13 np0005596060 systemd[1]: Finished Open vSwitch.
Jan 26 12:34:14 np0005596060 python3.9[47360]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:34:15 np0005596060 python3.9[47512]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 26 12:34:17 np0005596060 kernel: SELinux:  Converting 2750 SID table entries...
Jan 26 12:34:17 np0005596060 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 12:34:17 np0005596060 kernel: SELinux:  policy capability open_perms=1
Jan 26 12:34:17 np0005596060 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 12:34:17 np0005596060 kernel: SELinux:  policy capability always_check_network=0
Jan 26 12:34:17 np0005596060 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 12:34:17 np0005596060 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 12:34:17 np0005596060 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 12:34:18 np0005596060 python3.9[47667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:34:19 np0005596060 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 26 12:34:19 np0005596060 python3.9[47825]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:34:22 np0005596060 python3.9[47978]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:34:23 np0005596060 python3.9[48266]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 26 12:34:24 np0005596060 python3.9[48416]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:34:25 np0005596060 python3.9[48570]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:34:27 np0005596060 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 12:34:27 np0005596060 systemd[1]: Starting man-db-cache-update.service...
Jan 26 12:34:27 np0005596060 systemd[1]: Reloading.
Jan 26 12:34:27 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:34:27 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:34:27 np0005596060 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 12:34:28 np0005596060 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 12:34:28 np0005596060 systemd[1]: Finished man-db-cache-update.service.
Jan 26 12:34:28 np0005596060 systemd[1]: run-r149d3affed1d4d08905d89548e0c978e.service: Deactivated successfully.
Jan 26 12:34:29 np0005596060 python3.9[48887]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:34:29 np0005596060 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 26 12:34:29 np0005596060 systemd[1]: Stopped Network Manager Wait Online.
Jan 26 12:34:29 np0005596060 systemd[1]: Stopping Network Manager Wait Online...
Jan 26 12:34:29 np0005596060 systemd[1]: Stopping Network Manager...
Jan 26 12:34:29 np0005596060 NetworkManager[7217]: <info>  [1769448869.3080] caught SIGTERM, shutting down normally.
Jan 26 12:34:29 np0005596060 NetworkManager[7217]: <info>  [1769448869.3091] dhcp4 (eth0): canceled DHCP transaction
Jan 26 12:34:29 np0005596060 NetworkManager[7217]: <info>  [1769448869.3092] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 12:34:29 np0005596060 NetworkManager[7217]: <info>  [1769448869.3092] dhcp4 (eth0): state changed no lease
Jan 26 12:34:29 np0005596060 NetworkManager[7217]: <info>  [1769448869.3094] manager: NetworkManager state is now CONNECTED_SITE
Jan 26 12:34:29 np0005596060 NetworkManager[7217]: <info>  [1769448869.3152] exiting (success)
Jan 26 12:34:29 np0005596060 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 12:34:29 np0005596060 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 12:34:29 np0005596060 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 26 12:34:29 np0005596060 systemd[1]: Stopped Network Manager.
Jan 26 12:34:29 np0005596060 systemd[1]: NetworkManager.service: Consumed 13.217s CPU time, 4.4M memory peak, read 0B from disk, written 32.0K to disk.
Jan 26 12:34:29 np0005596060 systemd[1]: Starting Network Manager...
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.3807] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:529c9622-ee15-4af2-bc4b-6a5a72e6844b)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.3808] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.3853] manager[0x564eab1b6000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 26 12:34:29 np0005596060 systemd[1]: Starting Hostname Service...
Jan 26 12:34:29 np0005596060 systemd[1]: Started Hostname Service.
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4612] hostname: hostname: using hostnamed
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4613] hostname: static hostname changed from (none) to "compute-0"
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4617] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4621] manager[0x564eab1b6000]: rfkill: Wi-Fi hardware radio set enabled
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4622] manager[0x564eab1b6000]: rfkill: WWAN hardware radio set enabled
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4639] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4647] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4647] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4648] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4648] manager: Networking is enabled by state file
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4650] settings: Loaded settings plugin: keyfile (internal)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4653] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4676] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4684] dhcp: init: Using DHCP client 'internal'
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4686] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4690] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4696] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4704] device (lo): Activation: starting connection 'lo' (8d71b0e0-2bcd-4be2-b29f-3f05a483f058)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4709] device (eth0): carrier: link connected
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4712] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4716] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4716] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4721] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4726] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4731] device (eth1): carrier: link connected
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4736] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4741] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (132143e7-a9aa-5379-9229-ae1e0c5a0fb3) (indicated)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4742] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4747] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4754] device (eth1): Activation: starting connection 'ci-private-network' (132143e7-a9aa-5379-9229-ae1e0c5a0fb3)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4760] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 26 12:34:29 np0005596060 systemd[1]: Started Network Manager.
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4776] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4778] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4780] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4783] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4785] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4787] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4790] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4794] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4800] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4802] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4809] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4821] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4829] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4831] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4836] device (lo): Activation: successful, device activated.
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4844] dhcp4 (eth0): state changed new lease, address=38.129.56.171
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4850] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 26 12:34:29 np0005596060 systemd[1]: Starting Network Manager Wait Online...
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4907] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4911] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4916] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4919] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4922] device (eth1): Activation: successful, device activated.
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4930] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4931] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4934] manager: NetworkManager state is now CONNECTED_SITE
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4936] device (eth0): Activation: successful, device activated.
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4940] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 26 12:34:29 np0005596060 NetworkManager[48900]: <info>  [1769448869.4944] manager: startup complete
Jan 26 12:34:29 np0005596060 systemd[1]: Finished Network Manager Wait Online.
Jan 26 12:34:31 np0005596060 python3.9[49113]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:34:37 np0005596060 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 12:34:37 np0005596060 systemd[1]: Starting man-db-cache-update.service...
Jan 26 12:34:37 np0005596060 systemd[1]: Reloading.
Jan 26 12:34:37 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:34:37 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:34:37 np0005596060 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 12:34:39 np0005596060 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 12:34:39 np0005596060 systemd[1]: Finished man-db-cache-update.service.
Jan 26 12:34:39 np0005596060 systemd[1]: run-rdaa1fcc23e544189af1f4ad65ba13cd6.service: Deactivated successfully.
Jan 26 12:34:39 np0005596060 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 12:34:43 np0005596060 python3.9[49573]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:34:44 np0005596060 python3.9[49725]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:34:45 np0005596060 python3.9[49879]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:34:46 np0005596060 python3.9[50031]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:34:46 np0005596060 python3.9[50183]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:34:47 np0005596060 python3.9[50335]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:34:48 np0005596060 python3.9[50487]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:34:49 np0005596060 python3.9[50610]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769448887.919583-647-10513366732703/.source _original_basename=.fnpvr1c6 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:34:49 np0005596060 python3.9[50762]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:34:50 np0005596060 python3.9[50914]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 26 12:34:51 np0005596060 irqbalance[781]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 26 12:34:51 np0005596060 irqbalance[781]: IRQ 26 affinity is now unmanaged
Jan 26 12:34:51 np0005596060 python3.9[51066]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:34:53 np0005596060 python3.9[51493]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 26 12:34:55 np0005596060 ansible-async_wrapper.py[51668]: Invoked with j193667607877 300 /home/zuul/.ansible/tmp/ansible-tmp-1769448894.1923969-845-143672708894013/AnsiballZ_edpm_os_net_config.py _
Jan 26 12:34:55 np0005596060 ansible-async_wrapper.py[51671]: Starting module and watcher
Jan 26 12:34:55 np0005596060 ansible-async_wrapper.py[51671]: Start watching 51672 (300)
Jan 26 12:34:55 np0005596060 ansible-async_wrapper.py[51672]: Start module (51672)
Jan 26 12:34:55 np0005596060 ansible-async_wrapper.py[51668]: Return async_wrapper task started.
Jan 26 12:34:55 np0005596060 python3.9[51673]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 26 12:34:56 np0005596060 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 26 12:34:56 np0005596060 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 26 12:34:56 np0005596060 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 26 12:34:56 np0005596060 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 26 12:34:56 np0005596060 kernel: cfg80211: failed to load regulatory.db
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.1719] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.1743] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2221] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2223] audit: op="connection-add" uuid="ac8655bb-3e7f-4ad8-b938-f83dbf1739bc" name="br-ex-br" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2236] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2238] audit: op="connection-add" uuid="ef260177-7bde-4ab8-80b9-0d9151f7ffbe" name="br-ex-port" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2249] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2250] audit: op="connection-add" uuid="6810ce51-0fae-4de8-a110-e9a2187095ab" name="eth1-port" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2264] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2266] audit: op="connection-add" uuid="8a901528-dd9b-4201-84b3-d59b7b2bece5" name="vlan20-port" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2280] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2282] audit: op="connection-add" uuid="84eada3e-f0f7-4fb4-80d5-2032ad1bcf91" name="vlan21-port" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2302] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2304] audit: op="connection-add" uuid="0036d6b3-b701-4ee1-9569-d8c599dd0804" name="vlan22-port" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2316] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2318] audit: op="connection-add" uuid="f457f7b0-88fe-4dd8-947c-d46fbd0ac5d9" name="vlan23-port" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2356] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2376] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2379] audit: op="connection-add" uuid="419ad466-a402-4373-b613-394f6a598030" name="br-ex-if" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2543] audit: op="connection-update" uuid="132143e7-a9aa-5379-9229-ae1e0c5a0fb3" name="ci-private-network" args="ovs-external-ids.data,connection.controller,connection.slave-type,connection.port-type,connection.timestamp,connection.master,ipv4.dns,ipv4.routing-rules,ipv4.addresses,ipv4.routes,ipv4.method,ipv4.never-default,ovs-interface.type,ipv6.dns,ipv6.addr-gen-mode,ipv6.routing-rules,ipv6.addresses,ipv6.routes,ipv6.method" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2567] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2569] audit: op="connection-add" uuid="4be4efaa-fc2c-4a4a-ba16-c351c743a957" name="vlan20-if" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2587] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2588] audit: op="connection-add" uuid="50ca473b-3bd4-416c-887a-7f531959be23" name="vlan21-if" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2605] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2606] audit: op="connection-add" uuid="53fabde0-60cf-4f27-b1c7-2dabbc8e775f" name="vlan22-if" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2633] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2635] audit: op="connection-add" uuid="7a60b9b2-03cb-40a6-8b0f-7beb66f10262" name="vlan23-if" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2646] audit: op="connection-delete" uuid="811d519c-2a87-337a-ad17-b13888f5f045" name="Wired connection 1" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2659] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <warn>  [1769448897.2662] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2669] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2673] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (ac8655bb-3e7f-4ad8-b938-f83dbf1739bc)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2674] audit: op="connection-activate" uuid="ac8655bb-3e7f-4ad8-b938-f83dbf1739bc" name="br-ex-br" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2676] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <warn>  [1769448897.2676] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2682] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2686] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (ef260177-7bde-4ab8-80b9-0d9151f7ffbe)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2687] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <warn>  [1769448897.2688] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2692] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2696] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (6810ce51-0fae-4de8-a110-e9a2187095ab)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2699] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <warn>  [1769448897.2699] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2705] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2709] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (8a901528-dd9b-4201-84b3-d59b7b2bece5)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2710] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <warn>  [1769448897.2711] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2716] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2720] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (84eada3e-f0f7-4fb4-80d5-2032ad1bcf91)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2722] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <warn>  [1769448897.2722] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2727] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2731] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (0036d6b3-b701-4ee1-9569-d8c599dd0804)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2733] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <warn>  [1769448897.2734] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2739] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2743] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (f457f7b0-88fe-4dd8-947c-d46fbd0ac5d9)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2744] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2746] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2747] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2753] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <warn>  [1769448897.2753] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2755] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2759] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (419ad466-a402-4373-b613-394f6a598030)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2760] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2762] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2764] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2764] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2766] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2775] device (eth1): disconnecting for new activation request.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2776] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2778] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2780] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2781] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2783] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <warn>  [1769448897.2784] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2786] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2791] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (4be4efaa-fc2c-4a4a-ba16-c351c743a957)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2792] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2795] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2796] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2797] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2799] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <warn>  [1769448897.2799] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2802] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2805] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (50ca473b-3bd4-416c-887a-7f531959be23)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2805] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2808] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2810] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2811] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2814] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <warn>  [1769448897.2814] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2817] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2821] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (53fabde0-60cf-4f27-b1c7-2dabbc8e775f)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2822] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2824] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2826] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2827] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2830] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <warn>  [1769448897.2831] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2834] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2839] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (7a60b9b2-03cb-40a6-8b0f-7beb66f10262)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2840] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2842] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2844] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2845] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2846] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2858] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2861] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2864] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2866] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2877] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2882] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2886] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 kernel: ovs-system: entered promiscuous mode
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2890] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2894] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2900] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2909] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 systemd-udevd[51679]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2914] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2917] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2922] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2928] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 kernel: Timeout policy base is empty
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2934] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2936] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2942] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2947] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2951] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2954] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2960] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2968] dhcp4 (eth0): canceled DHCP transaction
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2969] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2969] dhcp4 (eth0): state changed no lease
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2972] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2986] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.2997] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51674 uid=0 result="fail" reason="Device is not activated"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3002] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3011] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3026] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3038] dhcp4 (eth0): state changed new lease, address=38.129.56.171
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3043] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 26 12:34:57 np0005596060 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3164] device (eth1): disconnecting for new activation request.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3166] audit: op="connection-activate" uuid="132143e7-a9aa-5379-9229-ae1e0c5a0fb3" name="ci-private-network" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3175] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 26 12:34:57 np0005596060 kernel: br-ex: entered promiscuous mode
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3330] device (eth1): Activation: starting connection 'ci-private-network' (132143e7-a9aa-5379-9229-ae1e0c5a0fb3)
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3334] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3335] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3336] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3337] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3340] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3341] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3342] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3354] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3358] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3366] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3370] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3374] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3377] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3380] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3383] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3386] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3388] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3392] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3395] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3398] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3402] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3405] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3408] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3414] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51674 uid=0 result="success"
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3418] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3436] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3438] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 26 12:34:57 np0005596060 kernel: vlan22: entered promiscuous mode
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3441] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3464] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3472] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3474] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3478] device (eth1): Activation: successful, device activated.
Jan 26 12:34:57 np0005596060 kernel: vlan21: entered promiscuous mode
Jan 26 12:34:57 np0005596060 systemd-udevd[51680]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 12:34:57 np0005596060 kernel: vlan20: entered promiscuous mode
Jan 26 12:34:57 np0005596060 systemd-udevd[51678]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 12:34:57 np0005596060 kernel: vlan23: entered promiscuous mode
Jan 26 12:34:57 np0005596060 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3740] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3748] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3755] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3759] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3806] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3815] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3825] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3856] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3865] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3872] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3887] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3893] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3896] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3900] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3904] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3905] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3908] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3917] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3918] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3920] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3924] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3930] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 26 12:34:57 np0005596060 NetworkManager[48900]: <info>  [1769448897.3934] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 26 12:34:58 np0005596060 NetworkManager[48900]: <info>  [1769448898.5509] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51674 uid=0 result="success"
Jan 26 12:34:58 np0005596060 NetworkManager[48900]: <info>  [1769448898.7053] checkpoint[0x564eab18b950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 26 12:34:58 np0005596060 NetworkManager[48900]: <info>  [1769448898.7056] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51674 uid=0 result="success"
Jan 26 12:34:58 np0005596060 python3.9[52032]: ansible-ansible.legacy.async_status Invoked with jid=j193667607877.51668 mode=status _async_dir=/root/.ansible_async
Jan 26 12:34:59 np0005596060 NetworkManager[48900]: <info>  [1769448899.0265] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51674 uid=0 result="success"
Jan 26 12:34:59 np0005596060 NetworkManager[48900]: <info>  [1769448899.0276] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51674 uid=0 result="success"
Jan 26 12:34:59 np0005596060 NetworkManager[48900]: <info>  [1769448899.2313] audit: op="networking-control" arg="global-dns-configuration" pid=51674 uid=0 result="success"
Jan 26 12:34:59 np0005596060 NetworkManager[48900]: <info>  [1769448899.2339] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 26 12:34:59 np0005596060 NetworkManager[48900]: <info>  [1769448899.2368] audit: op="networking-control" arg="global-dns-configuration" pid=51674 uid=0 result="success"
Jan 26 12:34:59 np0005596060 NetworkManager[48900]: <info>  [1769448899.2391] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51674 uid=0 result="success"
Jan 26 12:34:59 np0005596060 NetworkManager[48900]: <info>  [1769448899.4036] checkpoint[0x564eab18ba20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 26 12:34:59 np0005596060 NetworkManager[48900]: <info>  [1769448899.4041] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51674 uid=0 result="success"
Jan 26 12:34:59 np0005596060 ansible-async_wrapper.py[51672]: Module complete (51672)
Jan 26 12:34:59 np0005596060 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 26 12:35:00 np0005596060 ansible-async_wrapper.py[51671]: Done in kid B.
Jan 26 12:35:02 np0005596060 python3.9[52140]: ansible-ansible.legacy.async_status Invoked with jid=j193667607877.51668 mode=status _async_dir=/root/.ansible_async
Jan 26 12:35:02 np0005596060 python3.9[52240]: ansible-ansible.legacy.async_status Invoked with jid=j193667607877.51668 mode=cleanup _async_dir=/root/.ansible_async
Jan 26 12:35:04 np0005596060 python3.9[52392]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:35:05 np0005596060 python3.9[52515]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769448904.0735798-926-185818146369624/.source.returncode _original_basename=.tqw93yba follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:35:05 np0005596060 python3.9[52668]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:35:06 np0005596060 python3.9[52791]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769448905.4140465-974-155212077806715/.source.cfg _original_basename=.p90llzim follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:35:07 np0005596060 python3.9[52943]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:35:07 np0005596060 systemd[1]: Reloading Network Manager...
Jan 26 12:35:07 np0005596060 NetworkManager[48900]: <info>  [1769448907.2655] audit: op="reload" arg="0" pid=52947 uid=0 result="success"
Jan 26 12:35:07 np0005596060 NetworkManager[48900]: <info>  [1769448907.2664] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 26 12:35:07 np0005596060 systemd[1]: Reloaded Network Manager.
Jan 26 12:35:08 np0005596060 systemd[1]: session-10.scope: Deactivated successfully.
Jan 26 12:35:08 np0005596060 systemd[1]: session-10.scope: Consumed 51.234s CPU time.
Jan 26 12:35:08 np0005596060 systemd-logind[786]: Session 10 logged out. Waiting for processes to exit.
Jan 26 12:35:08 np0005596060 systemd-logind[786]: Removed session 10.
Jan 26 12:35:13 np0005596060 systemd-logind[786]: New session 11 of user zuul.
Jan 26 12:35:13 np0005596060 systemd[1]: Started Session 11 of User zuul.
Jan 26 12:35:14 np0005596060 python3.9[53131]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:35:15 np0005596060 python3.9[53286]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:35:17 np0005596060 python3.9[53479]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:35:17 np0005596060 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 26 12:35:17 np0005596060 systemd[1]: session-11.scope: Deactivated successfully.
Jan 26 12:35:17 np0005596060 systemd[1]: session-11.scope: Consumed 2.583s CPU time.
Jan 26 12:35:17 np0005596060 systemd-logind[786]: Session 11 logged out. Waiting for processes to exit.
Jan 26 12:35:17 np0005596060 systemd-logind[786]: Removed session 11.
Jan 26 12:35:22 np0005596060 systemd-logind[786]: New session 12 of user zuul.
Jan 26 12:35:22 np0005596060 systemd[1]: Started Session 12 of User zuul.
Jan 26 12:35:23 np0005596060 python3.9[53661]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:35:24 np0005596060 python3.9[53815]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:35:25 np0005596060 python3.9[53972]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:35:26 np0005596060 python3.9[54056]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:35:28 np0005596060 python3.9[54210]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:35:30 np0005596060 python3.9[54405]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:35:31 np0005596060 python3.9[54557]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:35:31 np0005596060 podman[54558]: 2026-01-26 17:35:31.484598435 +0000 UTC m=+0.089961301 system refresh
Jan 26 12:35:32 np0005596060 python3.9[54720]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:35:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:35:33 np0005596060 python3.9[54843]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769448931.7908823-197-540012679974/.source.json follow=False _original_basename=podman_network_config.j2 checksum=cd846ecb2fc56a39b81db009292874c71385e73b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:35:33 np0005596060 python3.9[54995]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:35:34 np0005596060 python3.9[55118]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769448933.327897-242-189880408789094/.source.conf follow=False _original_basename=registries.conf.j2 checksum=d562ec5932fcff7c51e03321842af205a2feb813 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:35:35 np0005596060 python3.9[55270]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:35:35 np0005596060 python3.9[55422]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:35:36 np0005596060 python3.9[55574]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:35:37 np0005596060 python3.9[55726]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:35:38 np0005596060 python3.9[55878]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:35:40 np0005596060 python3.9[56031]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:35:41 np0005596060 python3.9[56185]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:35:42 np0005596060 python3.9[56337]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:35:43 np0005596060 python3.9[56489]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:35:44 np0005596060 python3.9[56642]: ansible-service_facts Invoked
Jan 26 12:35:44 np0005596060 network[56659]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 12:35:44 np0005596060 network[56660]: 'network-scripts' will be removed from distribution in near future.
Jan 26 12:35:44 np0005596060 network[56661]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 12:35:52 np0005596060 python3.9[57113]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:35:55 np0005596060 python3.9[57266]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 26 12:35:56 np0005596060 python3.9[57418]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:35:57 np0005596060 python3.9[57543]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769448955.9027894-674-266634097212820/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:35:57 np0005596060 python3.9[57697]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:35:58 np0005596060 python3.9[57822]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769448957.4519083-719-77510977633357/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:00 np0005596060 python3.9[57976]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:02 np0005596060 python3.9[58130]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:36:03 np0005596060 python3.9[58214]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:36:05 np0005596060 python3.9[58368]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:36:06 np0005596060 python3.9[58452]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:36:06 np0005596060 chronyd[789]: chronyd exiting
Jan 26 12:36:06 np0005596060 systemd[1]: Stopping NTP client/server...
Jan 26 12:36:06 np0005596060 systemd[1]: chronyd.service: Deactivated successfully.
Jan 26 12:36:06 np0005596060 systemd[1]: Stopped NTP client/server.
Jan 26 12:36:06 np0005596060 systemd[1]: Starting NTP client/server...
Jan 26 12:36:06 np0005596060 chronyd[58460]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 26 12:36:06 np0005596060 chronyd[58460]: Frequency -25.321 +/- 0.119 ppm read from /var/lib/chrony/drift
Jan 26 12:36:06 np0005596060 chronyd[58460]: Loaded seccomp filter (level 2)
Jan 26 12:36:06 np0005596060 systemd[1]: Started NTP client/server.
Jan 26 12:36:07 np0005596060 systemd[1]: session-12.scope: Deactivated successfully.
Jan 26 12:36:07 np0005596060 systemd[1]: session-12.scope: Consumed 25.986s CPU time.
Jan 26 12:36:07 np0005596060 systemd-logind[786]: Session 12 logged out. Waiting for processes to exit.
Jan 26 12:36:07 np0005596060 systemd-logind[786]: Removed session 12.
Jan 26 12:36:12 np0005596060 systemd-logind[786]: New session 13 of user zuul.
Jan 26 12:36:12 np0005596060 systemd[1]: Started Session 13 of User zuul.
Jan 26 12:36:13 np0005596060 python3.9[58642]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:13 np0005596060 python3.9[58794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:14 np0005596060 python3.9[58917]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769448973.3477936-62-72422377515046/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:15 np0005596060 systemd[1]: session-13.scope: Deactivated successfully.
Jan 26 12:36:15 np0005596060 systemd[1]: session-13.scope: Consumed 1.749s CPU time.
Jan 26 12:36:15 np0005596060 systemd-logind[786]: Session 13 logged out. Waiting for processes to exit.
Jan 26 12:36:15 np0005596060 systemd-logind[786]: Removed session 13.
Jan 26 12:36:20 np0005596060 systemd-logind[786]: New session 14 of user zuul.
Jan 26 12:36:20 np0005596060 systemd[1]: Started Session 14 of User zuul.
Jan 26 12:36:21 np0005596060 python3.9[59095]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:36:22 np0005596060 python3.9[59251]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:23 np0005596060 python3.9[59426]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:24 np0005596060 python3.9[59549]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769448982.635198-83-275557584771485/.source.json _original_basename=.9h4r4n9r follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:25 np0005596060 python3.9[59701]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:25 np0005596060 python3.9[59824]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769448984.634446-152-27456754139015/.source _original_basename=.ao2ph5gw follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:26 np0005596060 python3.9[59976]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:36:27 np0005596060 python3.9[60128]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:27 np0005596060 python3.9[60251]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769448986.7091477-224-277277718694012/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:36:28 np0005596060 python3.9[60403]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:28 np0005596060 python3.9[60526]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769448987.8837154-224-6274321319170/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:36:30 np0005596060 python3.9[60678]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:31 np0005596060 python3.9[60830]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:31 np0005596060 python3.9[60953]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769448990.5639005-335-160912431021061/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:32 np0005596060 python3.9[61105]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:33 np0005596060 python3.9[61228]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769448991.953127-380-129710349261196/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:34 np0005596060 python3.9[61380]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:36:34 np0005596060 systemd[1]: Reloading.
Jan 26 12:36:34 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:36:34 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:36:34 np0005596060 systemd[1]: Reloading.
Jan 26 12:36:34 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:36:34 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:36:34 np0005596060 systemd[1]: Starting EDPM Container Shutdown...
Jan 26 12:36:34 np0005596060 systemd[1]: Finished EDPM Container Shutdown.
Jan 26 12:36:35 np0005596060 python3.9[61607]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:36 np0005596060 python3.9[61730]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769448995.071794-449-146997645922174/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:36 np0005596060 python3.9[61882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:37 np0005596060 python3.9[62005]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769448996.4644496-494-268716162013730/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:38 np0005596060 python3.9[62157]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:36:38 np0005596060 systemd[1]: Reloading.
Jan 26 12:36:38 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:36:38 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:36:38 np0005596060 systemd[1]: Reloading.
Jan 26 12:36:38 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:36:38 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:36:39 np0005596060 systemd[1]: Starting Create netns directory...
Jan 26 12:36:39 np0005596060 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 26 12:36:39 np0005596060 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 26 12:36:39 np0005596060 systemd[1]: Finished Create netns directory.
Jan 26 12:36:40 np0005596060 python3.9[62384]: ansible-ansible.builtin.service_facts Invoked
Jan 26 12:36:40 np0005596060 network[62401]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 12:36:40 np0005596060 network[62402]: 'network-scripts' will be removed from distribution in near future.
Jan 26 12:36:40 np0005596060 network[62403]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 12:36:43 np0005596060 python3.9[62665]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:36:43 np0005596060 systemd[1]: Reloading.
Jan 26 12:36:44 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:36:44 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:36:44 np0005596060 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 26 12:36:44 np0005596060 iptables.init[62705]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 26 12:36:44 np0005596060 iptables.init[62705]: iptables: Flushing firewall rules: [  OK  ]
Jan 26 12:36:44 np0005596060 systemd[1]: iptables.service: Deactivated successfully.
Jan 26 12:36:44 np0005596060 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 26 12:36:45 np0005596060 python3.9[62901]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:36:46 np0005596060 python3.9[63055]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:36:46 np0005596060 systemd[1]: Reloading.
Jan 26 12:36:46 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:36:46 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:36:46 np0005596060 systemd[1]: Starting Netfilter Tables...
Jan 26 12:36:46 np0005596060 systemd[1]: Finished Netfilter Tables.
Jan 26 12:36:48 np0005596060 python3.9[63247]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:36:49 np0005596060 python3.9[63400]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:50 np0005596060 python3.9[63525]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769449009.318725-701-103644239676907/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:51 np0005596060 python3.9[63678]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:36:51 np0005596060 systemd[1]: Reloading OpenSSH server daemon...
Jan 26 12:36:51 np0005596060 systemd[1]: Reloaded OpenSSH server daemon.
Jan 26 12:36:52 np0005596060 python3.9[63834]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:53 np0005596060 python3.9[63986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:53 np0005596060 python3.9[64109]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449012.5870578-794-239602053603876/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:54 np0005596060 python3.9[64261]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 26 12:36:54 np0005596060 systemd[1]: Starting Time & Date Service...
Jan 26 12:36:55 np0005596060 systemd[1]: Started Time & Date Service.
Jan 26 12:36:55 np0005596060 python3.9[64418]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:56 np0005596060 python3.9[64570]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:57 np0005596060 python3.9[64693]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769449016.0614688-899-192011448449702/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:57 np0005596060 python3.9[64845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:58 np0005596060 python3.9[64968]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769449017.4755454-944-197744663156057/.source.yaml _original_basename=.hc8fkivp follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:36:59 np0005596060 python3.9[65120]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:36:59 np0005596060 python3.9[65243]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449018.8682058-989-76669525597258/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:01 np0005596060 python3.9[65395]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:37:01 np0005596060 python3.9[65548]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:37:02 np0005596060 python3[65701]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 12:37:03 np0005596060 python3.9[65853]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:37:04 np0005596060 python3.9[65976]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449022.9641922-1106-131038527434616/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:04 np0005596060 python3.9[66128]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:37:05 np0005596060 python3.9[66251]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449024.3872733-1151-159026016007063/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:06 np0005596060 python3.9[66403]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:37:06 np0005596060 python3.9[66526]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449025.7312572-1196-37941988078635/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:07 np0005596060 python3.9[66678]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:37:08 np0005596060 python3.9[66801]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449027.1117303-1241-52990397641048/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:09 np0005596060 python3.9[66953]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:37:09 np0005596060 python3.9[67076]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449028.590308-1286-255976793155344/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:10 np0005596060 python3.9[67228]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:11 np0005596060 python3.9[67380]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:37:12 np0005596060 python3.9[67540]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:13 np0005596060 python3.9[67693]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:13 np0005596060 python3.9[67845]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:14 np0005596060 python3.9[67997]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 26 12:37:15 np0005596060 python3.9[68150]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 26 12:37:16 np0005596060 systemd[1]: session-14.scope: Deactivated successfully.
Jan 26 12:37:16 np0005596060 systemd[1]: session-14.scope: Consumed 36.585s CPU time.
Jan 26 12:37:16 np0005596060 systemd-logind[786]: Session 14 logged out. Waiting for processes to exit.
Jan 26 12:37:16 np0005596060 systemd-logind[786]: Removed session 14.
Jan 26 12:37:21 np0005596060 systemd-logind[786]: New session 15 of user zuul.
Jan 26 12:37:21 np0005596060 systemd[1]: Started Session 15 of User zuul.
Jan 26 12:37:22 np0005596060 python3.9[68331]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 26 12:37:23 np0005596060 python3.9[68483]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:37:24 np0005596060 python3.9[68635]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:37:25 np0005596060 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 26 12:37:25 np0005596060 python3.9[68789]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0zaLI2LTbNOyYJLCkCHwBNvCbWxyjbFipOdeKx9WVOSI6BraalDHlRpumUYDm8JC8abEq1qaZCBLmxjPXdZu5OGr/kPmf6SKEUmhy4iVIlqya8lpE59ci/zJO3FmNG+BncaGfJAQ0wqUgfNc/27u/wxD+gMrd6Ocz1dRHjtV22N4KnHAZP+sb0G1LZUx4WhJ07B4r/YaWeXOL2puHk0zHfnxSMIyyEvTlx9zlqSArxDuyq6AA7skTmkIlIC7eYbws7R3oP5PdtDl0sj1SEaTS4uAOSxbcYCV3H/IBa5evA+pxo7m3gf2YQ/QsGcfMQF4GefF3pWfZN0BGK7DWb3bckv62Oq9geYx47ccajXIEt3vsncvsrZhozX5OPyxW4eLJ8r7ovCX+5uGTuF9LrmwDdc7XRJ7rXBWSKh66/yxUcPGEQIk7OoEA30ZmKeipyMJQHHrWKxAqkqz6+ZQ41KvXaFIB1lRQf4tlFTAfrm9xwChyoCfrU95QYM4V+zqCQ6E=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILbMiL3+EkWDKAQHi9JT5Xqvk8rNrdT5SVX2Gg2RyqsV#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLVPallz3Z+vrxzfd9Dxuo/G10ZpIDOna2ftaoWWaEiUQrn77C3vB8d1zHHnHxMi8qaS4W4lfA32FenhGfBnVVU=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8WJSSyps5/MOwluaYVKvHLbB3OOMaGha+S5zKQqPSAcedSyuyvzK3GC+qad2ZbcfCfiNZHWM+ylBueRDL14BxpBXCAqNKHN1Yo1Fvlb4JCkcbhbgkVGemDEsbBiNmTtSlxRI40uI8M0+E42b22Zh7qz1PC1XmS0po5y6SwzcfgbnZtuyVFsvGHqDWkkWV/gsjiZ57qMaC+DJaIhvfW+qObinKJqXeuPQbF6yjfhXPHf2nwYEGY9rM5zEvZyfC/Dnrg62lDFjq4LGLrb83ipcBQq+zMejeECDs/u6noWAMs8f5HcxW0zembv86K5pOtPJKA13xVImv+kfGS+EctaKEBB/ooqOhN9AdXFEJUuSDn/2iUm07NnrEN9WhrfiuxLCO/lBWwxFGKcQECRviuCwE51F4fVEduv4ZiDgPcsHo+fYbxXsG50xc8/Yumd+a60pkpu09wVk1P3fCbFbRd9kD4elm067blILF+Zs+YuWnuaK3LiCb+qzmDKQB4AArubE=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILePT0ow4c3ejDoUzP/5T/dIHfr1xTtwEP/2z/Lf68vz#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB0tLEQbxQsuF0gTFyU7HBbMRjNrt7rMl1+QXcK3yfs0Q29raINYHrTVwzWeSuTUiO464HBZr4aPyLzhd+2Z3xs=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCOz38rnMu5RPID5R9a4AOkL2Ge6a4dzWxjmOZKIbuidITYge9lyZ+ThI161k8ZELWw9SBoQvNwVmySyCRLJH9qPhNCVmEqUqZJohUEZQ+lNpyZk3JkhZsgLTYjkdV/DPqp3iLlV/asPhl18j+CFKmN5Dx0qMsAg1f9CbOZwhdgeVEeB3IqdjBrPIMgAwVlacU9ty90SAUJj+RoMZePfAh7i2q7VTPHcvKRA1Mz4Q+RRKojI3DfR0se9vFL9KYNhD/O0JbAZksdom7tVuZ6LjcyIYqBUeB2jYwSO66sVFNWI4JwFEr5OOb1EiOGWGudWuZVfdeD+TYeZk0hco2GhtmXBVDWWeYQNNXAKRcQ7aM2y9SlN6gOKzJq08LuoShMOl8IuErTDV7Cp3WpuPPqDc5gv0swDVoOXsbju1Bxm2aLE7d1GiJbuhLS+pvIgc0MrnyOhUrTGTAdyfZ4gsw6BekK5Gf22C6xvZ865/N5LCr5jahKtqujZ6X6sECNsBQ1j0M=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOdmNmdvqfqzPDx4l6nvkEw8mwn78xc6LydRgAb6QEGT#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKb0RFR0G0BOVptSrXD3m/y/AD2q+whTWANps4FtvEcdq4zrHxHJM7JO/mkAyT4VEcyt7wmguNEWF5NqwEZeFZ4=#012 create=True mode=0644 path=/tmp/ansible.b4ehoc4x state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:26 np0005596060 python3.9[68941]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.b4ehoc4x' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:37:27 np0005596060 python3.9[69095]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.b4ehoc4x state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:27 np0005596060 systemd[1]: session-15.scope: Deactivated successfully.
Jan 26 12:37:27 np0005596060 systemd[1]: session-15.scope: Consumed 3.552s CPU time.
Jan 26 12:37:27 np0005596060 systemd-logind[786]: Session 15 logged out. Waiting for processes to exit.
Jan 26 12:37:27 np0005596060 systemd-logind[786]: Removed session 15.
Jan 26 12:37:33 np0005596060 systemd-logind[786]: New session 16 of user zuul.
Jan 26 12:37:33 np0005596060 systemd[1]: Started Session 16 of User zuul.
Jan 26 12:37:34 np0005596060 python3.9[69273]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:37:36 np0005596060 python3.9[69429]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 26 12:37:36 np0005596060 python3.9[69583]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:37:38 np0005596060 python3.9[69736]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:37:38 np0005596060 python3.9[69889]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:37:39 np0005596060 python3.9[70043]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:37:40 np0005596060 python3.9[70198]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:37:40 np0005596060 systemd[1]: session-16.scope: Deactivated successfully.
Jan 26 12:37:40 np0005596060 systemd[1]: session-16.scope: Consumed 4.755s CPU time.
Jan 26 12:37:40 np0005596060 systemd-logind[786]: Session 16 logged out. Waiting for processes to exit.
Jan 26 12:37:40 np0005596060 systemd-logind[786]: Removed session 16.
Jan 26 12:37:45 np0005596060 systemd-logind[786]: New session 17 of user zuul.
Jan 26 12:37:45 np0005596060 systemd[1]: Started Session 17 of User zuul.
Jan 26 12:37:46 np0005596060 python3.9[70376]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:37:48 np0005596060 python3.9[70532]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:37:48 np0005596060 python3.9[70616]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 12:37:51 np0005596060 python3.9[70767]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:37:52 np0005596060 python3.9[70918]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 12:37:53 np0005596060 python3.9[71068]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:37:53 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 12:37:54 np0005596060 python3.9[71219]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:37:54 np0005596060 systemd[1]: session-17.scope: Deactivated successfully.
Jan 26 12:37:54 np0005596060 systemd[1]: session-17.scope: Consumed 5.974s CPU time.
Jan 26 12:37:54 np0005596060 systemd-logind[786]: Session 17 logged out. Waiting for processes to exit.
Jan 26 12:37:54 np0005596060 systemd-logind[786]: Removed session 17.
Jan 26 12:38:03 np0005596060 systemd-logind[786]: New session 18 of user zuul.
Jan 26 12:38:03 np0005596060 systemd[1]: Started Session 18 of User zuul.
Jan 26 12:38:09 np0005596060 python3[71985]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:38:11 np0005596060 python3[72081]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 26 12:38:13 np0005596060 python3[72108]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 12:38:13 np0005596060 python3[72134]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:38:13 np0005596060 kernel: loop: module loaded
Jan 26 12:38:13 np0005596060 kernel: loop3: detected capacity change from 0 to 14680064
Jan 26 12:38:14 np0005596060 python3[72169]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:38:14 np0005596060 lvm[72172]: PV /dev/loop3 not used.
Jan 26 12:38:14 np0005596060 lvm[72174]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 12:38:14 np0005596060 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 26 12:38:14 np0005596060 lvm[72184]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 12:38:14 np0005596060 lvm[72184]: VG ceph_vg0 finished
Jan 26 12:38:14 np0005596060 lvm[72180]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 26 12:38:14 np0005596060 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 26 12:38:14 np0005596060 python3[72262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:38:15 np0005596060 python3[72335]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769449094.6407402-36955-132693725015663/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:38:16 np0005596060 chronyd[58460]: Selected source 23.159.16.194 (pool.ntp.org)
Jan 26 12:38:16 np0005596060 python3[72385]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:38:16 np0005596060 systemd[1]: Reloading.
Jan 26 12:38:16 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:38:16 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:38:16 np0005596060 systemd[1]: Starting Ceph OSD losetup...
Jan 26 12:38:16 np0005596060 bash[72424]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Jan 26 12:38:16 np0005596060 systemd[1]: Finished Ceph OSD losetup.
Jan 26 12:38:16 np0005596060 lvm[72425]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 12:38:16 np0005596060 lvm[72425]: VG ceph_vg0 finished
Jan 26 12:38:18 np0005596060 python3[72449]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:38:21 np0005596060 python3[72542]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 26 12:38:23 np0005596060 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 12:38:23 np0005596060 systemd[1]: Starting man-db-cache-update.service...
Jan 26 12:38:23 np0005596060 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 12:38:23 np0005596060 systemd[1]: Finished man-db-cache-update.service.
Jan 26 12:38:23 np0005596060 systemd[1]: run-rdff5bed2ea3742de9c6de16674556ec8.service: Deactivated successfully.
Jan 26 12:38:24 np0005596060 python3[72653]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 12:38:24 np0005596060 python3[72681]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:38:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:38:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:38:25 np0005596060 python3[72743]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:38:25 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:38:25 np0005596060 python3[72769]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:38:26 np0005596060 python3[72847]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:38:26 np0005596060 python3[72920]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769449106.257654-37146-154834643473901/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:38:27 np0005596060 python3[73022]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:38:28 np0005596060 python3[73095]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769449107.4904768-37164-119240447828205/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:38:28 np0005596060 python3[73145]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 12:38:28 np0005596060 python3[73173]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 12:38:29 np0005596060 python3[73201]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 12:38:29 np0005596060 python3[73227]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 12:38:30 np0005596060 python3[73253]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid d4cd1917-5876-51b6-bc64-65a16199754d --config /home/ceph-admin/assimilate_ceph.conf --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:38:30 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:38:30 np0005596060 systemd-logind[786]: New session 19 of user ceph-admin.
Jan 26 12:38:30 np0005596060 systemd[1]: Created slice User Slice of UID 42477.
Jan 26 12:38:30 np0005596060 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 26 12:38:30 np0005596060 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 26 12:38:30 np0005596060 systemd[1]: Starting User Manager for UID 42477...
Jan 26 12:38:30 np0005596060 systemd[73272]: Queued start job for default target Main User Target.
Jan 26 12:38:30 np0005596060 systemd[73272]: Created slice User Application Slice.
Jan 26 12:38:30 np0005596060 systemd[73272]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 26 12:38:30 np0005596060 systemd[73272]: Started Daily Cleanup of User's Temporary Directories.
Jan 26 12:38:30 np0005596060 systemd[73272]: Reached target Paths.
Jan 26 12:38:30 np0005596060 systemd[73272]: Reached target Timers.
Jan 26 12:38:30 np0005596060 systemd[73272]: Starting D-Bus User Message Bus Socket...
Jan 26 12:38:30 np0005596060 systemd[73272]: Starting Create User's Volatile Files and Directories...
Jan 26 12:38:30 np0005596060 systemd[73272]: Listening on D-Bus User Message Bus Socket.
Jan 26 12:38:30 np0005596060 systemd[73272]: Reached target Sockets.
Jan 26 12:38:30 np0005596060 systemd[73272]: Finished Create User's Volatile Files and Directories.
Jan 26 12:38:30 np0005596060 systemd[73272]: Reached target Basic System.
Jan 26 12:38:30 np0005596060 systemd[73272]: Reached target Main User Target.
Jan 26 12:38:30 np0005596060 systemd[73272]: Startup finished in 116ms.
Jan 26 12:38:30 np0005596060 systemd[1]: Started User Manager for UID 42477.
Jan 26 12:38:30 np0005596060 systemd[1]: Started Session 19 of User ceph-admin.
Jan 26 12:38:30 np0005596060 systemd[1]: session-19.scope: Deactivated successfully.
Jan 26 12:38:30 np0005596060 systemd-logind[786]: Session 19 logged out. Waiting for processes to exit.
Jan 26 12:38:30 np0005596060 systemd-logind[786]: Removed session 19.
Jan 26 12:38:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay-compat1374985032-merged.mount: Deactivated successfully.
Jan 26 12:38:33 np0005596060 systemd[1]: var-lib-containers-storage-overlay-compat1374985032-lower\x2dmapped.mount: Deactivated successfully.
Jan 26 12:38:40 np0005596060 systemd[1]: Stopping User Manager for UID 42477...
Jan 26 12:38:40 np0005596060 systemd[73272]: Activating special unit Exit the Session...
Jan 26 12:38:40 np0005596060 systemd[73272]: Stopped target Main User Target.
Jan 26 12:38:40 np0005596060 systemd[73272]: Stopped target Basic System.
Jan 26 12:38:40 np0005596060 systemd[73272]: Stopped target Paths.
Jan 26 12:38:40 np0005596060 systemd[73272]: Stopped target Sockets.
Jan 26 12:38:40 np0005596060 systemd[73272]: Stopped target Timers.
Jan 26 12:38:40 np0005596060 systemd[73272]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 26 12:38:40 np0005596060 systemd[73272]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 26 12:38:40 np0005596060 systemd[73272]: Closed D-Bus User Message Bus Socket.
Jan 26 12:38:40 np0005596060 systemd[73272]: Stopped Create User's Volatile Files and Directories.
Jan 26 12:38:40 np0005596060 systemd[73272]: Removed slice User Application Slice.
Jan 26 12:38:40 np0005596060 systemd[73272]: Reached target Shutdown.
Jan 26 12:38:40 np0005596060 systemd[73272]: Finished Exit the Session.
Jan 26 12:38:40 np0005596060 systemd[73272]: Reached target Exit the Session.
Jan 26 12:38:40 np0005596060 systemd[1]: user@42477.service: Deactivated successfully.
Jan 26 12:38:40 np0005596060 systemd[1]: Stopped User Manager for UID 42477.
Jan 26 12:38:40 np0005596060 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 26 12:38:40 np0005596060 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 26 12:38:40 np0005596060 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 26 12:38:40 np0005596060 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 26 12:38:40 np0005596060 systemd[1]: Removed slice User Slice of UID 42477.
Jan 26 12:38:47 np0005596060 podman[73326]: 2026-01-26 17:38:47.825311536 +0000 UTC m=+17.088932776 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:38:47 np0005596060 podman[73385]: 2026-01-26 17:38:47.927845903 +0000 UTC m=+0.069509531 container create cc5ef00c79c78764629f678ebd63743858a344cde62fa3ad6ed99e039ffbdc6c (image=quay.io/ceph/ceph:v18, name=epic_edison, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 12:38:47 np0005596060 podman[73385]: 2026-01-26 17:38:47.879223562 +0000 UTC m=+0.020887170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:47 np0005596060 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 26 12:38:47 np0005596060 systemd[1]: Started libpod-conmon-cc5ef00c79c78764629f678ebd63743858a344cde62fa3ad6ed99e039ffbdc6c.scope.
Jan 26 12:38:48 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:48 np0005596060 podman[73385]: 2026-01-26 17:38:48.071156531 +0000 UTC m=+0.212820149 container init cc5ef00c79c78764629f678ebd63743858a344cde62fa3ad6ed99e039ffbdc6c (image=quay.io/ceph/ceph:v18, name=epic_edison, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:38:48 np0005596060 podman[73385]: 2026-01-26 17:38:48.079420049 +0000 UTC m=+0.221083637 container start cc5ef00c79c78764629f678ebd63743858a344cde62fa3ad6ed99e039ffbdc6c (image=quay.io/ceph/ceph:v18, name=epic_edison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:38:48 np0005596060 podman[73385]: 2026-01-26 17:38:48.114664132 +0000 UTC m=+0.256327780 container attach cc5ef00c79c78764629f678ebd63743858a344cde62fa3ad6ed99e039ffbdc6c (image=quay.io/ceph/ceph:v18, name=epic_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 12:38:48 np0005596060 epic_edison[73401]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 26 12:38:48 np0005596060 systemd[1]: libpod-cc5ef00c79c78764629f678ebd63743858a344cde62fa3ad6ed99e039ffbdc6c.scope: Deactivated successfully.
Jan 26 12:38:48 np0005596060 podman[73385]: 2026-01-26 17:38:48.374232874 +0000 UTC m=+0.515896472 container died cc5ef00c79c78764629f678ebd63743858a344cde62fa3ad6ed99e039ffbdc6c (image=quay.io/ceph/ceph:v18, name=epic_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:38:48 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b3e691d40278bc409ec32e99282fc0f80c512e2d399bebce0ff41288afb5c855-merged.mount: Deactivated successfully.
Jan 26 12:38:48 np0005596060 podman[73385]: 2026-01-26 17:38:48.41947592 +0000 UTC m=+0.561139508 container remove cc5ef00c79c78764629f678ebd63743858a344cde62fa3ad6ed99e039ffbdc6c (image=quay.io/ceph/ceph:v18, name=epic_edison, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:38:48 np0005596060 systemd[1]: libpod-conmon-cc5ef00c79c78764629f678ebd63743858a344cde62fa3ad6ed99e039ffbdc6c.scope: Deactivated successfully.
Jan 26 12:38:48 np0005596060 podman[73421]: 2026-01-26 17:38:48.487214505 +0000 UTC m=+0.043496822 container create 6846eb9facc843816a1d5edb27e5b225a5f95aa57f70228a7d7983933cf801db (image=quay.io/ceph/ceph:v18, name=dreamy_sanderson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 12:38:48 np0005596060 systemd[1]: Started libpod-conmon-6846eb9facc843816a1d5edb27e5b225a5f95aa57f70228a7d7983933cf801db.scope.
Jan 26 12:38:48 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:48 np0005596060 podman[73421]: 2026-01-26 17:38:48.542982177 +0000 UTC m=+0.099264514 container init 6846eb9facc843816a1d5edb27e5b225a5f95aa57f70228a7d7983933cf801db (image=quay.io/ceph/ceph:v18, name=dreamy_sanderson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:38:48 np0005596060 podman[73421]: 2026-01-26 17:38:48.548347693 +0000 UTC m=+0.104630010 container start 6846eb9facc843816a1d5edb27e5b225a5f95aa57f70228a7d7983933cf801db (image=quay.io/ceph/ceph:v18, name=dreamy_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:38:48 np0005596060 dreamy_sanderson[73436]: 167 167
Jan 26 12:38:48 np0005596060 systemd[1]: libpod-6846eb9facc843816a1d5edb27e5b225a5f95aa57f70228a7d7983933cf801db.scope: Deactivated successfully.
Jan 26 12:38:48 np0005596060 podman[73421]: 2026-01-26 17:38:48.551970645 +0000 UTC m=+0.108253012 container attach 6846eb9facc843816a1d5edb27e5b225a5f95aa57f70228a7d7983933cf801db (image=quay.io/ceph/ceph:v18, name=dreamy_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:38:48 np0005596060 podman[73421]: 2026-01-26 17:38:48.552809926 +0000 UTC m=+0.109092243 container died 6846eb9facc843816a1d5edb27e5b225a5f95aa57f70228a7d7983933cf801db (image=quay.io/ceph/ceph:v18, name=dreamy_sanderson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 12:38:48 np0005596060 podman[73421]: 2026-01-26 17:38:48.47001802 +0000 UTC m=+0.026300357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:48 np0005596060 podman[73421]: 2026-01-26 17:38:48.58852896 +0000 UTC m=+0.144811277 container remove 6846eb9facc843816a1d5edb27e5b225a5f95aa57f70228a7d7983933cf801db (image=quay.io/ceph/ceph:v18, name=dreamy_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:38:48 np0005596060 systemd[1]: libpod-conmon-6846eb9facc843816a1d5edb27e5b225a5f95aa57f70228a7d7983933cf801db.scope: Deactivated successfully.
Jan 26 12:38:48 np0005596060 podman[73453]: 2026-01-26 17:38:48.642948238 +0000 UTC m=+0.035988212 container create 4153e0d2f4d340d85ee409e00e1317c553497eaa3154fdf75ab6cc1d37b240aa (image=quay.io/ceph/ceph:v18, name=relaxed_kilby, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:38:48 np0005596060 systemd[1]: Started libpod-conmon-4153e0d2f4d340d85ee409e00e1317c553497eaa3154fdf75ab6cc1d37b240aa.scope.
Jan 26 12:38:48 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:48 np0005596060 podman[73453]: 2026-01-26 17:38:48.704831935 +0000 UTC m=+0.097871929 container init 4153e0d2f4d340d85ee409e00e1317c553497eaa3154fdf75ab6cc1d37b240aa (image=quay.io/ceph/ceph:v18, name=relaxed_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 12:38:48 np0005596060 podman[73453]: 2026-01-26 17:38:48.710469708 +0000 UTC m=+0.103509682 container start 4153e0d2f4d340d85ee409e00e1317c553497eaa3154fdf75ab6cc1d37b240aa (image=quay.io/ceph/ceph:v18, name=relaxed_kilby, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:38:48 np0005596060 podman[73453]: 2026-01-26 17:38:48.714598042 +0000 UTC m=+0.107638046 container attach 4153e0d2f4d340d85ee409e00e1317c553497eaa3154fdf75ab6cc1d37b240aa (image=quay.io/ceph/ceph:v18, name=relaxed_kilby, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 12:38:48 np0005596060 podman[73453]: 2026-01-26 17:38:48.627454346 +0000 UTC m=+0.020494340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:48 np0005596060 relaxed_kilby[73470]: AQCopndpsFeoKxAAD9lS5rkIdzcIWSZ8fy//Gg==
Jan 26 12:38:48 np0005596060 systemd[1]: libpod-4153e0d2f4d340d85ee409e00e1317c553497eaa3154fdf75ab6cc1d37b240aa.scope: Deactivated successfully.
Jan 26 12:38:48 np0005596060 podman[73453]: 2026-01-26 17:38:48.736433275 +0000 UTC m=+0.129473249 container died 4153e0d2f4d340d85ee409e00e1317c553497eaa3154fdf75ab6cc1d37b240aa (image=quay.io/ceph/ceph:v18, name=relaxed_kilby, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:38:48 np0005596060 podman[73453]: 2026-01-26 17:38:48.774405177 +0000 UTC m=+0.167445141 container remove 4153e0d2f4d340d85ee409e00e1317c553497eaa3154fdf75ab6cc1d37b240aa (image=quay.io/ceph/ceph:v18, name=relaxed_kilby, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:38:48 np0005596060 systemd[1]: libpod-conmon-4153e0d2f4d340d85ee409e00e1317c553497eaa3154fdf75ab6cc1d37b240aa.scope: Deactivated successfully.
Jan 26 12:38:48 np0005596060 podman[73490]: 2026-01-26 17:38:48.842818109 +0000 UTC m=+0.043873642 container create f30e8c19bf81902473ee287ce02360e3f7f8cc6ae14e418900fd1ee6f8f716a9 (image=quay.io/ceph/ceph:v18, name=peaceful_einstein, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 26 12:38:48 np0005596060 systemd[1]: Started libpod-conmon-f30e8c19bf81902473ee287ce02360e3f7f8cc6ae14e418900fd1ee6f8f716a9.scope.
Jan 26 12:38:48 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:48 np0005596060 podman[73490]: 2026-01-26 17:38:48.824480385 +0000 UTC m=+0.025535938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:49 np0005596060 podman[73490]: 2026-01-26 17:38:49.144508187 +0000 UTC m=+0.345563740 container init f30e8c19bf81902473ee287ce02360e3f7f8cc6ae14e418900fd1ee6f8f716a9 (image=quay.io/ceph/ceph:v18, name=peaceful_einstein, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:38:49 np0005596060 podman[73490]: 2026-01-26 17:38:49.150640493 +0000 UTC m=+0.351696026 container start f30e8c19bf81902473ee287ce02360e3f7f8cc6ae14e418900fd1ee6f8f716a9 (image=quay.io/ceph/ceph:v18, name=peaceful_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:38:49 np0005596060 podman[73490]: 2026-01-26 17:38:49.154004728 +0000 UTC m=+0.355060261 container attach f30e8c19bf81902473ee287ce02360e3f7f8cc6ae14e418900fd1ee6f8f716a9 (image=quay.io/ceph/ceph:v18, name=peaceful_einstein, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:38:49 np0005596060 peaceful_einstein[73506]: AQCppndp4ggDChAAvNbKY7ULH1zHHQOnudFwBg==
Jan 26 12:38:49 np0005596060 systemd[1]: libpod-f30e8c19bf81902473ee287ce02360e3f7f8cc6ae14e418900fd1ee6f8f716a9.scope: Deactivated successfully.
Jan 26 12:38:49 np0005596060 podman[73490]: 2026-01-26 17:38:49.171092131 +0000 UTC m=+0.372147664 container died f30e8c19bf81902473ee287ce02360e3f7f8cc6ae14e418900fd1ee6f8f716a9 (image=quay.io/ceph/ceph:v18, name=peaceful_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:38:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8d0272de4043d14eb3f8fe8d3c5e7b1fb40d832aea339504d5de850514eeab62-merged.mount: Deactivated successfully.
Jan 26 12:38:49 np0005596060 podman[73490]: 2026-01-26 17:38:49.209051442 +0000 UTC m=+0.410106975 container remove f30e8c19bf81902473ee287ce02360e3f7f8cc6ae14e418900fd1ee6f8f716a9 (image=quay.io/ceph/ceph:v18, name=peaceful_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 26 12:38:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:38:49 np0005596060 systemd[1]: libpod-conmon-f30e8c19bf81902473ee287ce02360e3f7f8cc6ae14e418900fd1ee6f8f716a9.scope: Deactivated successfully.
Jan 26 12:38:49 np0005596060 podman[73525]: 2026-01-26 17:38:49.273571026 +0000 UTC m=+0.044679283 container create 309aa936b4534c899797ae89ef29d3f702e8bb0e79d2ebff394dc4e3c9a9d23a (image=quay.io/ceph/ceph:v18, name=zealous_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 12:38:49 np0005596060 systemd[1]: Started libpod-conmon-309aa936b4534c899797ae89ef29d3f702e8bb0e79d2ebff394dc4e3c9a9d23a.scope.
Jan 26 12:38:49 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:49 np0005596060 podman[73525]: 2026-01-26 17:38:49.340553282 +0000 UTC m=+0.111661519 container init 309aa936b4534c899797ae89ef29d3f702e8bb0e79d2ebff394dc4e3c9a9d23a (image=quay.io/ceph/ceph:v18, name=zealous_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:38:49 np0005596060 podman[73525]: 2026-01-26 17:38:49.346250286 +0000 UTC m=+0.117358543 container start 309aa936b4534c899797ae89ef29d3f702e8bb0e79d2ebff394dc4e3c9a9d23a (image=quay.io/ceph/ceph:v18, name=zealous_edison, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 12:38:49 np0005596060 podman[73525]: 2026-01-26 17:38:49.251098807 +0000 UTC m=+0.022207054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:49 np0005596060 podman[73525]: 2026-01-26 17:38:49.350319739 +0000 UTC m=+0.121427956 container attach 309aa936b4534c899797ae89ef29d3f702e8bb0e79d2ebff394dc4e3c9a9d23a (image=quay.io/ceph/ceph:v18, name=zealous_edison, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 12:38:49 np0005596060 zealous_edison[73541]: AQCppndpY4/IFRAAWyYS/RpeGeyehlVTEvbdXg==
Jan 26 12:38:49 np0005596060 systemd[1]: libpod-309aa936b4534c899797ae89ef29d3f702e8bb0e79d2ebff394dc4e3c9a9d23a.scope: Deactivated successfully.
Jan 26 12:38:49 np0005596060 podman[73525]: 2026-01-26 17:38:49.36930871 +0000 UTC m=+0.140416937 container died 309aa936b4534c899797ae89ef29d3f702e8bb0e79d2ebff394dc4e3c9a9d23a (image=quay.io/ceph/ceph:v18, name=zealous_edison, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:38:49 np0005596060 podman[73525]: 2026-01-26 17:38:49.408668216 +0000 UTC m=+0.179776433 container remove 309aa936b4534c899797ae89ef29d3f702e8bb0e79d2ebff394dc4e3c9a9d23a (image=quay.io/ceph/ceph:v18, name=zealous_edison, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:38:49 np0005596060 systemd[1]: libpod-conmon-309aa936b4534c899797ae89ef29d3f702e8bb0e79d2ebff394dc4e3c9a9d23a.scope: Deactivated successfully.
Jan 26 12:38:49 np0005596060 podman[73561]: 2026-01-26 17:38:49.47239106 +0000 UTC m=+0.042834406 container create 1a24340a816814d62e950a4f7abe6605c52b6ad5f934b0983cf103fca72890fb (image=quay.io/ceph/ceph:v18, name=clever_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 12:38:49 np0005596060 systemd[1]: Started libpod-conmon-1a24340a816814d62e950a4f7abe6605c52b6ad5f934b0983cf103fca72890fb.scope.
Jan 26 12:38:49 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/355f7a30cee6c9b8704a4f7fc34b868ad2fd746226bf581be5adb5bd710ae5d1/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:49 np0005596060 podman[73561]: 2026-01-26 17:38:49.54545328 +0000 UTC m=+0.115896666 container init 1a24340a816814d62e950a4f7abe6605c52b6ad5f934b0983cf103fca72890fb (image=quay.io/ceph/ceph:v18, name=clever_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 26 12:38:49 np0005596060 podman[73561]: 2026-01-26 17:38:49.453526592 +0000 UTC m=+0.023969988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:49 np0005596060 podman[73561]: 2026-01-26 17:38:49.551639817 +0000 UTC m=+0.122083163 container start 1a24340a816814d62e950a4f7abe6605c52b6ad5f934b0983cf103fca72890fb (image=quay.io/ceph/ceph:v18, name=clever_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 12:38:49 np0005596060 podman[73561]: 2026-01-26 17:38:49.555463413 +0000 UTC m=+0.125906799 container attach 1a24340a816814d62e950a4f7abe6605c52b6ad5f934b0983cf103fca72890fb (image=quay.io/ceph/ceph:v18, name=clever_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 26 12:38:49 np0005596060 clever_yalow[73577]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 26 12:38:49 np0005596060 clever_yalow[73577]: setting min_mon_release = pacific
Jan 26 12:38:49 np0005596060 clever_yalow[73577]: /usr/bin/monmaptool: set fsid to d4cd1917-5876-51b6-bc64-65a16199754d
Jan 26 12:38:49 np0005596060 clever_yalow[73577]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 26 12:38:49 np0005596060 systemd[1]: libpod-1a24340a816814d62e950a4f7abe6605c52b6ad5f934b0983cf103fca72890fb.scope: Deactivated successfully.
Jan 26 12:38:49 np0005596060 podman[73561]: 2026-01-26 17:38:49.586302694 +0000 UTC m=+0.156746040 container died 1a24340a816814d62e950a4f7abe6605c52b6ad5f934b0983cf103fca72890fb (image=quay.io/ceph/ceph:v18, name=clever_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 12:38:49 np0005596060 podman[73561]: 2026-01-26 17:38:49.630807131 +0000 UTC m=+0.201250487 container remove 1a24340a816814d62e950a4f7abe6605c52b6ad5f934b0983cf103fca72890fb (image=quay.io/ceph/ceph:v18, name=clever_yalow, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:38:49 np0005596060 systemd[1]: libpod-conmon-1a24340a816814d62e950a4f7abe6605c52b6ad5f934b0983cf103fca72890fb.scope: Deactivated successfully.
Jan 26 12:38:49 np0005596060 podman[73595]: 2026-01-26 17:38:49.706398365 +0000 UTC m=+0.050969851 container create 937a716ee586085fb696d8d2cfbfaaaf595a7fb9dec71c201897e288d30af2b1 (image=quay.io/ceph/ceph:v18, name=mystifying_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:38:49 np0005596060 systemd[1]: Started libpod-conmon-937a716ee586085fb696d8d2cfbfaaaf595a7fb9dec71c201897e288d30af2b1.scope.
Jan 26 12:38:49 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86625f8272b308b396edbda05006837b836dcecf0872f78f045bcaecf2308d7/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86625f8272b308b396edbda05006837b836dcecf0872f78f045bcaecf2308d7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86625f8272b308b396edbda05006837b836dcecf0872f78f045bcaecf2308d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86625f8272b308b396edbda05006837b836dcecf0872f78f045bcaecf2308d7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:49 np0005596060 podman[73595]: 2026-01-26 17:38:49.678726644 +0000 UTC m=+0.023298150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:49 np0005596060 podman[73595]: 2026-01-26 17:38:49.793610483 +0000 UTC m=+0.138181989 container init 937a716ee586085fb696d8d2cfbfaaaf595a7fb9dec71c201897e288d30af2b1 (image=quay.io/ceph/ceph:v18, name=mystifying_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 12:38:49 np0005596060 podman[73595]: 2026-01-26 17:38:49.805772111 +0000 UTC m=+0.150343597 container start 937a716ee586085fb696d8d2cfbfaaaf595a7fb9dec71c201897e288d30af2b1 (image=quay.io/ceph/ceph:v18, name=mystifying_cori, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 26 12:38:49 np0005596060 podman[73595]: 2026-01-26 17:38:49.809539787 +0000 UTC m=+0.154111293 container attach 937a716ee586085fb696d8d2cfbfaaaf595a7fb9dec71c201897e288d30af2b1 (image=quay.io/ceph/ceph:v18, name=mystifying_cori, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 26 12:38:49 np0005596060 systemd[1]: libpod-937a716ee586085fb696d8d2cfbfaaaf595a7fb9dec71c201897e288d30af2b1.scope: Deactivated successfully.
Jan 26 12:38:49 np0005596060 podman[73595]: 2026-01-26 17:38:49.896773145 +0000 UTC m=+0.241344681 container died 937a716ee586085fb696d8d2cfbfaaaf595a7fb9dec71c201897e288d30af2b1 (image=quay.io/ceph/ceph:v18, name=mystifying_cori, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:38:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a86625f8272b308b396edbda05006837b836dcecf0872f78f045bcaecf2308d7-merged.mount: Deactivated successfully.
Jan 26 12:38:49 np0005596060 podman[73595]: 2026-01-26 17:38:49.942239867 +0000 UTC m=+0.286811373 container remove 937a716ee586085fb696d8d2cfbfaaaf595a7fb9dec71c201897e288d30af2b1 (image=quay.io/ceph/ceph:v18, name=mystifying_cori, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:38:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:38:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:38:49 np0005596060 systemd[1]: libpod-conmon-937a716ee586085fb696d8d2cfbfaaaf595a7fb9dec71c201897e288d30af2b1.scope: Deactivated successfully.
Jan 26 12:38:50 np0005596060 systemd[1]: Reloading.
Jan 26 12:38:50 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:38:50 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:38:50 np0005596060 systemd[1]: Reloading.
Jan 26 12:38:50 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:38:50 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:38:50 np0005596060 systemd[1]: Reached target All Ceph clusters and services.
Jan 26 12:38:50 np0005596060 systemd[1]: Reloading.
Jan 26 12:38:50 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:38:50 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:38:50 np0005596060 systemd[1]: Reached target Ceph cluster d4cd1917-5876-51b6-bc64-65a16199754d.
Jan 26 12:38:50 np0005596060 systemd[1]: Reloading.
Jan 26 12:38:50 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:38:50 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:38:51 np0005596060 systemd[1]: Reloading.
Jan 26 12:38:51 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:38:51 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:38:51 np0005596060 systemd[1]: Created slice Slice /system/ceph-d4cd1917-5876-51b6-bc64-65a16199754d.
Jan 26 12:38:51 np0005596060 systemd[1]: Reached target System Time Set.
Jan 26 12:38:51 np0005596060 systemd[1]: Reached target System Time Synchronized.
Jan 26 12:38:51 np0005596060 systemd[1]: Starting Ceph mon.compute-0 for d4cd1917-5876-51b6-bc64-65a16199754d...
Jan 26 12:38:51 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:38:51 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:38:51 np0005596060 podman[73889]: 2026-01-26 17:38:51.549670357 +0000 UTC m=+0.057848315 container create 80f51b5beb505d2585de8a3ff8de447ceee34cb69a7f869014996db0f84489bb (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:38:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/822442bf8eb4cff7e27f82073ccb78f34bab2517271e81c88aedf34ef8210d37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/822442bf8eb4cff7e27f82073ccb78f34bab2517271e81c88aedf34ef8210d37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/822442bf8eb4cff7e27f82073ccb78f34bab2517271e81c88aedf34ef8210d37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/822442bf8eb4cff7e27f82073ccb78f34bab2517271e81c88aedf34ef8210d37/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:51 np0005596060 podman[73889]: 2026-01-26 17:38:51.530088271 +0000 UTC m=+0.038266269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:51 np0005596060 podman[73889]: 2026-01-26 17:38:51.644087168 +0000 UTC m=+0.152265206 container init 80f51b5beb505d2585de8a3ff8de447ceee34cb69a7f869014996db0f84489bb (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:38:51 np0005596060 podman[73889]: 2026-01-26 17:38:51.652558842 +0000 UTC m=+0.160736840 container start 80f51b5beb505d2585de8a3ff8de447ceee34cb69a7f869014996db0f84489bb (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 12:38:51 np0005596060 bash[73889]: 80f51b5beb505d2585de8a3ff8de447ceee34cb69a7f869014996db0f84489bb
Jan 26 12:38:51 np0005596060 systemd[1]: Started Ceph mon.compute-0 for d4cd1917-5876-51b6-bc64-65a16199754d.
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: set uid:gid to 167:167 (ceph:ceph)
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: pidfile_write: ignore empty --pid-file
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: load: jerasure load: lrc 
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: RocksDB version: 7.9.2
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Git sha 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: DB SUMMARY
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: DB Session ID:  CNQU40CPSBGGYTN4CESE
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: CURRENT file:  CURRENT
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: IDENTITY file:  IDENTITY
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                         Options.error_if_exists: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                       Options.create_if_missing: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                         Options.paranoid_checks: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                                     Options.env: 0x562407ae6c40
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                                Options.info_log: 0x56240a244ec0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                Options.max_file_opening_threads: 16
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                              Options.statistics: (nil)
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                               Options.use_fsync: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                       Options.max_log_file_size: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                         Options.allow_fallocate: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                        Options.use_direct_reads: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:          Options.create_missing_column_families: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                              Options.db_log_dir: 
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                                 Options.wal_dir: 
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                   Options.advise_random_on_open: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                    Options.write_buffer_manager: 0x56240a254b40
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                            Options.rate_limiter: (nil)
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                  Options.unordered_write: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                               Options.row_cache: None
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                              Options.wal_filter: None
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.allow_ingest_behind: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.two_write_queues: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.manual_wal_flush: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.wal_compression: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.atomic_flush: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                 Options.log_readahead_size: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.allow_data_in_errors: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.db_host_id: __hostname__
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.max_background_jobs: 2
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.max_background_compactions: -1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.max_subcompactions: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.max_total_wal_size: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                          Options.max_open_files: -1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                          Options.bytes_per_sync: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:       Options.compaction_readahead_size: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                  Options.max_background_flushes: -1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Compression algorithms supported:
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: 	kZSTD supported: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: 	kXpressCompression supported: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: 	kBZip2Compression supported: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: 	kLZ4Compression supported: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: 	kZlibCompression supported: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: 	kSnappyCompression supported: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:           Options.merge_operator: 
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:        Options.compaction_filter: None
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56240a244aa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x56240a23d1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:        Options.write_buffer_size: 33554432
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:  Options.max_write_buffer_number: 2
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:          Options.compression: NoCompression
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.num_levels: 7
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a7008efc-af18-475b-8e6d-abf0122d49b8
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449131710985, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449131713459, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "CNQU40CPSBGGYTN4CESE", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449131713642, "job": 1, "event": "recovery_finished"}
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56240a266e00
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: DB pointer 0x56240a37a000
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x56240a23d1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid d4cd1917-5876-51b6-bc64-65a16199754d
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@-1(???) e0 preinit fsid d4cd1917-5876-51b6-bc64-65a16199754d
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2026-01-26T17:38:49.845564Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).mds e1 new map
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: log_channel(cluster) log [DBG] : fsmap 
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mkfs d4cd1917-5876-51b6-bc64-65a16199754d
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 26 12:38:51 np0005596060 podman[73910]: 2026-01-26 17:38:51.776697995 +0000 UTC m=+0.064234798 container create 1c2b5d8b9a024312dd174de686bf96d3aee5930610ee03c67ab5fc3f2d80f57c (image=quay.io/ceph/ceph:v18, name=quirky_williamson, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 26 12:38:51 np0005596060 ceph-mon[73909]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 26 12:38:51 np0005596060 systemd[1]: Started libpod-conmon-1c2b5d8b9a024312dd174de686bf96d3aee5930610ee03c67ab5fc3f2d80f57c.scope.
Jan 26 12:38:51 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:51 np0005596060 podman[73910]: 2026-01-26 17:38:51.753520748 +0000 UTC m=+0.041057591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec432db5b65b5109c032d7c79262bf26e88f88d3ee6e480e1cae33847fa24e50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec432db5b65b5109c032d7c79262bf26e88f88d3ee6e480e1cae33847fa24e50/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec432db5b65b5109c032d7c79262bf26e88f88d3ee6e480e1cae33847fa24e50/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:51 np0005596060 podman[73910]: 2026-01-26 17:38:51.873431574 +0000 UTC m=+0.160968397 container init 1c2b5d8b9a024312dd174de686bf96d3aee5930610ee03c67ab5fc3f2d80f57c (image=quay.io/ceph/ceph:v18, name=quirky_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 12:38:51 np0005596060 podman[73910]: 2026-01-26 17:38:51.887991113 +0000 UTC m=+0.175527896 container start 1c2b5d8b9a024312dd174de686bf96d3aee5930610ee03c67ab5fc3f2d80f57c (image=quay.io/ceph/ceph:v18, name=quirky_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 12:38:51 np0005596060 podman[73910]: 2026-01-26 17:38:51.892980759 +0000 UTC m=+0.180517582 container attach 1c2b5d8b9a024312dd174de686bf96d3aee5930610ee03c67ab5fc3f2d80f57c (image=quay.io/ceph/ceph:v18, name=quirky_williamson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:38:52 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 26 12:38:52 np0005596060 ceph-mon[73909]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3139308393' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]:  cluster:
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]:    id:     d4cd1917-5876-51b6-bc64-65a16199754d
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]:    health: HEALTH_OK
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]: 
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]:  services:
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]:    mon: 1 daemons, quorum compute-0 (age 0.550805s)
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]:    mgr: no daemons active
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]:    osd: 0 osds: 0 up, 0 in
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]: 
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]:  data:
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]:    pools:   0 pools, 0 pgs
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]:    objects: 0 objects, 0 B
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]:    usage:   0 B used, 0 B / 0 B avail
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]:    pgs:     
Jan 26 12:38:52 np0005596060 quirky_williamson[73964]: 
Jan 26 12:38:52 np0005596060 systemd[1]: libpod-1c2b5d8b9a024312dd174de686bf96d3aee5930610ee03c67ab5fc3f2d80f57c.scope: Deactivated successfully.
Jan 26 12:38:52 np0005596060 podman[73910]: 2026-01-26 17:38:52.317445977 +0000 UTC m=+0.604982780 container died 1c2b5d8b9a024312dd174de686bf96d3aee5930610ee03c67ab5fc3f2d80f57c (image=quay.io/ceph/ceph:v18, name=quirky_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:38:52 np0005596060 podman[73910]: 2026-01-26 17:38:52.362494707 +0000 UTC m=+0.650031500 container remove 1c2b5d8b9a024312dd174de686bf96d3aee5930610ee03c67ab5fc3f2d80f57c (image=quay.io/ceph/ceph:v18, name=quirky_williamson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:38:52 np0005596060 systemd[1]: libpod-conmon-1c2b5d8b9a024312dd174de686bf96d3aee5930610ee03c67ab5fc3f2d80f57c.scope: Deactivated successfully.
Jan 26 12:38:52 np0005596060 podman[74005]: 2026-01-26 17:38:52.436810309 +0000 UTC m=+0.050265744 container create c59a9e25ff42bf6aace91efdbdc542dc49af06b4a9f93d8af49327cc1af236b4 (image=quay.io/ceph/ceph:v18, name=elated_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:38:52 np0005596060 systemd[1]: Started libpod-conmon-c59a9e25ff42bf6aace91efdbdc542dc49af06b4a9f93d8af49327cc1af236b4.scope.
Jan 26 12:38:52 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234a5da4b2c8f3e5451357f0f945e7ba88163202b060786b16a4cbbd9650cc22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234a5da4b2c8f3e5451357f0f945e7ba88163202b060786b16a4cbbd9650cc22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234a5da4b2c8f3e5451357f0f945e7ba88163202b060786b16a4cbbd9650cc22/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/234a5da4b2c8f3e5451357f0f945e7ba88163202b060786b16a4cbbd9650cc22/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:52 np0005596060 podman[74005]: 2026-01-26 17:38:52.41552657 +0000 UTC m=+0.028982085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:52 np0005596060 podman[74005]: 2026-01-26 17:38:52.510110905 +0000 UTC m=+0.123566320 container init c59a9e25ff42bf6aace91efdbdc542dc49af06b4a9f93d8af49327cc1af236b4 (image=quay.io/ceph/ceph:v18, name=elated_raman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 12:38:52 np0005596060 podman[74005]: 2026-01-26 17:38:52.517739158 +0000 UTC m=+0.131194563 container start c59a9e25ff42bf6aace91efdbdc542dc49af06b4a9f93d8af49327cc1af236b4 (image=quay.io/ceph/ceph:v18, name=elated_raman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 12:38:52 np0005596060 podman[74005]: 2026-01-26 17:38:52.530582573 +0000 UTC m=+0.144038028 container attach c59a9e25ff42bf6aace91efdbdc542dc49af06b4a9f93d8af49327cc1af236b4 (image=quay.io/ceph/ceph:v18, name=elated_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 12:38:52 np0005596060 ceph-mon[73909]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 26 12:38:52 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 26 12:38:52 np0005596060 ceph-mon[73909]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4113566098' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 26 12:38:52 np0005596060 ceph-mon[73909]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4113566098' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 26 12:38:52 np0005596060 elated_raman[74021]: 
Jan 26 12:38:52 np0005596060 elated_raman[74021]: [global]
Jan 26 12:38:52 np0005596060 elated_raman[74021]: #011fsid = d4cd1917-5876-51b6-bc64-65a16199754d
Jan 26 12:38:52 np0005596060 elated_raman[74021]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 26 12:38:52 np0005596060 systemd[1]: libpod-c59a9e25ff42bf6aace91efdbdc542dc49af06b4a9f93d8af49327cc1af236b4.scope: Deactivated successfully.
Jan 26 12:38:52 np0005596060 podman[74005]: 2026-01-26 17:38:52.930443468 +0000 UTC m=+0.543898863 container died c59a9e25ff42bf6aace91efdbdc542dc49af06b4a9f93d8af49327cc1af236b4 (image=quay.io/ceph/ceph:v18, name=elated_raman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 12:38:52 np0005596060 systemd[1]: var-lib-containers-storage-overlay-234a5da4b2c8f3e5451357f0f945e7ba88163202b060786b16a4cbbd9650cc22-merged.mount: Deactivated successfully.
Jan 26 12:38:52 np0005596060 podman[74005]: 2026-01-26 17:38:52.967912717 +0000 UTC m=+0.581368112 container remove c59a9e25ff42bf6aace91efdbdc542dc49af06b4a9f93d8af49327cc1af236b4 (image=quay.io/ceph/ceph:v18, name=elated_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:38:52 np0005596060 systemd[1]: libpod-conmon-c59a9e25ff42bf6aace91efdbdc542dc49af06b4a9f93d8af49327cc1af236b4.scope: Deactivated successfully.
Jan 26 12:38:53 np0005596060 podman[74059]: 2026-01-26 17:38:53.023583126 +0000 UTC m=+0.038703631 container create 6df29712ad4ae6b10d1528b233bb59601f6d494ab62aeeb2fa2ebc55e99fca6f (image=quay.io/ceph/ceph:v18, name=epic_heyrovsky, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 26 12:38:53 np0005596060 systemd[1]: Started libpod-conmon-6df29712ad4ae6b10d1528b233bb59601f6d494ab62aeeb2fa2ebc55e99fca6f.scope.
Jan 26 12:38:53 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76187dbdd2c921131417f5e31f8ab3b12e6a8ee7cb396e7c6ce56c0ed3c98a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76187dbdd2c921131417f5e31f8ab3b12e6a8ee7cb396e7c6ce56c0ed3c98a1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76187dbdd2c921131417f5e31f8ab3b12e6a8ee7cb396e7c6ce56c0ed3c98a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76187dbdd2c921131417f5e31f8ab3b12e6a8ee7cb396e7c6ce56c0ed3c98a1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:53 np0005596060 podman[74059]: 2026-01-26 17:38:53.007629132 +0000 UTC m=+0.022749637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:53 np0005596060 podman[74059]: 2026-01-26 17:38:53.127566849 +0000 UTC m=+0.142687374 container init 6df29712ad4ae6b10d1528b233bb59601f6d494ab62aeeb2fa2ebc55e99fca6f (image=quay.io/ceph/ceph:v18, name=epic_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:38:53 np0005596060 podman[74059]: 2026-01-26 17:38:53.141355018 +0000 UTC m=+0.156475553 container start 6df29712ad4ae6b10d1528b233bb59601f6d494ab62aeeb2fa2ebc55e99fca6f (image=quay.io/ceph/ceph:v18, name=epic_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 12:38:53 np0005596060 podman[74059]: 2026-01-26 17:38:53.145335179 +0000 UTC m=+0.160455694 container attach 6df29712ad4ae6b10d1528b233bb59601f6d494ab62aeeb2fa2ebc55e99fca6f (image=quay.io/ceph/ceph:v18, name=epic_heyrovsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 12:38:53 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:38:53 np0005596060 ceph-mon[73909]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2685214305' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:38:53 np0005596060 systemd[1]: libpod-6df29712ad4ae6b10d1528b233bb59601f6d494ab62aeeb2fa2ebc55e99fca6f.scope: Deactivated successfully.
Jan 26 12:38:53 np0005596060 podman[74059]: 2026-01-26 17:38:53.604956837 +0000 UTC m=+0.620077342 container died 6df29712ad4ae6b10d1528b233bb59601f6d494ab62aeeb2fa2ebc55e99fca6f (image=quay.io/ceph/ceph:v18, name=epic_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 12:38:53 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c76187dbdd2c921131417f5e31f8ab3b12e6a8ee7cb396e7c6ce56c0ed3c98a1-merged.mount: Deactivated successfully.
Jan 26 12:38:53 np0005596060 podman[74059]: 2026-01-26 17:38:53.658553824 +0000 UTC m=+0.673674379 container remove 6df29712ad4ae6b10d1528b233bb59601f6d494ab62aeeb2fa2ebc55e99fca6f (image=quay.io/ceph/ceph:v18, name=epic_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 12:38:53 np0005596060 systemd[1]: libpod-conmon-6df29712ad4ae6b10d1528b233bb59601f6d494ab62aeeb2fa2ebc55e99fca6f.scope: Deactivated successfully.
Jan 26 12:38:53 np0005596060 systemd[1]: Stopping Ceph mon.compute-0 for d4cd1917-5876-51b6-bc64-65a16199754d...
Jan 26 12:38:53 np0005596060 ceph-mon[73909]: from='client.? 192.168.122.100:0/4113566098' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 26 12:38:53 np0005596060 ceph-mon[73909]: from='client.? 192.168.122.100:0/4113566098' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 26 12:38:53 np0005596060 ceph-mon[73909]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 26 12:38:53 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 26 12:38:53 np0005596060 ceph-mon[73909]: mon.compute-0@0(leader) e1 shutdown
Jan 26 12:38:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0[73905]: 2026-01-26T17:38:53.986+0000 7f281458f640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 26 12:38:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0[73905]: 2026-01-26T17:38:53.986+0000 7f281458f640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 26 12:38:53 np0005596060 ceph-mon[73909]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 26 12:38:53 np0005596060 ceph-mon[73909]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 26 12:38:54 np0005596060 podman[74145]: 2026-01-26 17:38:54.048024776 +0000 UTC m=+0.119692062 container died 80f51b5beb505d2585de8a3ff8de447ceee34cb69a7f869014996db0f84489bb (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:38:54 np0005596060 systemd[1]: var-lib-containers-storage-overlay-822442bf8eb4cff7e27f82073ccb78f34bab2517271e81c88aedf34ef8210d37-merged.mount: Deactivated successfully.
Jan 26 12:38:54 np0005596060 podman[74145]: 2026-01-26 17:38:54.094991435 +0000 UTC m=+0.166658751 container remove 80f51b5beb505d2585de8a3ff8de447ceee34cb69a7f869014996db0f84489bb (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 26 12:38:54 np0005596060 bash[74145]: ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0
Jan 26 12:38:54 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:38:54 np0005596060 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 26 12:38:54 np0005596060 systemd[1]: ceph-d4cd1917-5876-51b6-bc64-65a16199754d@mon.compute-0.service: Deactivated successfully.
Jan 26 12:38:54 np0005596060 systemd[1]: Stopped Ceph mon.compute-0 for d4cd1917-5876-51b6-bc64-65a16199754d.
Jan 26 12:38:54 np0005596060 systemd[1]: ceph-d4cd1917-5876-51b6-bc64-65a16199754d@mon.compute-0.service: Consumed 1.207s CPU time.
Jan 26 12:38:54 np0005596060 systemd[1]: Starting Ceph mon.compute-0 for d4cd1917-5876-51b6-bc64-65a16199754d...
Jan 26 12:38:54 np0005596060 podman[74248]: 2026-01-26 17:38:54.524974942 +0000 UTC m=+0.055982338 container create ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 12:38:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e27342424e7b7d60dce2f9cdc6b2b7af2cab47829efab81f940344e799628eef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e27342424e7b7d60dce2f9cdc6b2b7af2cab47829efab81f940344e799628eef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e27342424e7b7d60dce2f9cdc6b2b7af2cab47829efab81f940344e799628eef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e27342424e7b7d60dce2f9cdc6b2b7af2cab47829efab81f940344e799628eef/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:54 np0005596060 podman[74248]: 2026-01-26 17:38:54.495411654 +0000 UTC m=+0.026419150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:54 np0005596060 podman[74248]: 2026-01-26 17:38:54.600780032 +0000 UTC m=+0.131787458 container init ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:38:54 np0005596060 podman[74248]: 2026-01-26 17:38:54.6106055 +0000 UTC m=+0.141612896 container start ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 12:38:54 np0005596060 bash[74248]: ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c
Jan 26 12:38:54 np0005596060 systemd[1]: Started Ceph mon.compute-0 for d4cd1917-5876-51b6-bc64-65a16199754d.
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: set uid:gid to 167:167 (ceph:ceph)
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: pidfile_write: ignore empty --pid-file
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: load: jerasure load: lrc 
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: RocksDB version: 7.9.2
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Git sha 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: DB SUMMARY
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: DB Session ID:  RT38KYIRSGE9064E0SIL
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: CURRENT file:  CURRENT
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: IDENTITY file:  IDENTITY
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55210 ; 
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                         Options.error_if_exists: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                       Options.create_if_missing: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                         Options.paranoid_checks: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                                     Options.env: 0x56529171fc40
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                                Options.info_log: 0x565293729040
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                Options.max_file_opening_threads: 16
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                              Options.statistics: (nil)
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                               Options.use_fsync: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                       Options.max_log_file_size: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                         Options.allow_fallocate: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                        Options.use_direct_reads: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:          Options.create_missing_column_families: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                              Options.db_log_dir: 
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                                 Options.wal_dir: 
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                   Options.advise_random_on_open: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                    Options.write_buffer_manager: 0x565293738b40
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                            Options.rate_limiter: (nil)
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                  Options.unordered_write: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                               Options.row_cache: None
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                              Options.wal_filter: None
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.allow_ingest_behind: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.two_write_queues: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.manual_wal_flush: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.wal_compression: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.atomic_flush: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                 Options.log_readahead_size: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.allow_data_in_errors: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.db_host_id: __hostname__
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.max_background_jobs: 2
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.max_background_compactions: -1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.max_subcompactions: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.max_total_wal_size: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                          Options.max_open_files: -1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                          Options.bytes_per_sync: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:       Options.compaction_readahead_size: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                  Options.max_background_flushes: -1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Compression algorithms supported:
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: #011kZSTD supported: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: #011kXpressCompression supported: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: #011kBZip2Compression supported: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: #011kLZ4Compression supported: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: #011kZlibCompression supported: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: #011kSnappyCompression supported: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:           Options.merge_operator: 
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:        Options.compaction_filter: None
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x565293728c40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5652937211f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:        Options.write_buffer_size: 33554432
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:  Options.max_write_buffer_number: 2
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:          Options.compression: NoCompression
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.num_levels: 7
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a7008efc-af18-475b-8e6d-abf0122d49b8
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449134662374, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449134665319, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 136, "table_properties": {"data_size": 53385, "index_size": 170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2933, "raw_average_key_size": 29, "raw_value_size": 51027, "raw_average_value_size": 515, "num_data_blocks": 9, "num_entries": 99, "num_filter_entries": 99, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449134, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449134665471, "job": 1, "event": "recovery_finished"}
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56529374ae00
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: DB pointer 0x565293852000
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.46 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      2/0   55.46 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 4.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 4.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5652937211f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid d4cd1917-5876-51b6-bc64-65a16199754d
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0@-1(???) e1 preinit fsid d4cd1917-5876-51b6-bc64-65a16199754d
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0@-1(???).mds e1 new map
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap 
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 26 12:38:54 np0005596060 podman[74268]: 2026-01-26 17:38:54.694355601 +0000 UTC m=+0.048353685 container create f4d1c73b5929ee49a275f79f0c4eee3ef4490df13fc2a0058a360cd0bb1f8689 (image=quay.io/ceph/ceph:v18, name=exciting_dhawan, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 12:38:54 np0005596060 podman[74268]: 2026-01-26 17:38:54.677274218 +0000 UTC m=+0.031272312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:54 np0005596060 ceph-mon[74267]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 26 12:38:54 np0005596060 systemd[1]: Started libpod-conmon-f4d1c73b5929ee49a275f79f0c4eee3ef4490df13fc2a0058a360cd0bb1f8689.scope.
Jan 26 12:38:54 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b2ed6f76dd0803f96f28971b0d4082224cbf20b30137a5425bf2f155a3dc92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b2ed6f76dd0803f96f28971b0d4082224cbf20b30137a5425bf2f155a3dc92/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5b2ed6f76dd0803f96f28971b0d4082224cbf20b30137a5425bf2f155a3dc92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:54 np0005596060 podman[74268]: 2026-01-26 17:38:54.846603236 +0000 UTC m=+0.200601320 container init f4d1c73b5929ee49a275f79f0c4eee3ef4490df13fc2a0058a360cd0bb1f8689 (image=quay.io/ceph/ceph:v18, name=exciting_dhawan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 12:38:54 np0005596060 podman[74268]: 2026-01-26 17:38:54.854017144 +0000 UTC m=+0.208015228 container start f4d1c73b5929ee49a275f79f0c4eee3ef4490df13fc2a0058a360cd0bb1f8689 (image=quay.io/ceph/ceph:v18, name=exciting_dhawan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 12:38:54 np0005596060 podman[74268]: 2026-01-26 17:38:54.85784369 +0000 UTC m=+0.211841774 container attach f4d1c73b5929ee49a275f79f0c4eee3ef4490df13fc2a0058a360cd0bb1f8689 (image=quay.io/ceph/ceph:v18, name=exciting_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 12:38:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Jan 26 12:38:55 np0005596060 systemd[1]: libpod-f4d1c73b5929ee49a275f79f0c4eee3ef4490df13fc2a0058a360cd0bb1f8689.scope: Deactivated successfully.
Jan 26 12:38:55 np0005596060 podman[74268]: 2026-01-26 17:38:55.292558967 +0000 UTC m=+0.646557071 container died f4d1c73b5929ee49a275f79f0c4eee3ef4490df13fc2a0058a360cd0bb1f8689 (image=quay.io/ceph/ceph:v18, name=exciting_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 12:38:55 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a5b2ed6f76dd0803f96f28971b0d4082224cbf20b30137a5425bf2f155a3dc92-merged.mount: Deactivated successfully.
Jan 26 12:38:55 np0005596060 podman[74268]: 2026-01-26 17:38:55.332804276 +0000 UTC m=+0.686802360 container remove f4d1c73b5929ee49a275f79f0c4eee3ef4490df13fc2a0058a360cd0bb1f8689 (image=quay.io/ceph/ceph:v18, name=exciting_dhawan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:38:55 np0005596060 systemd[1]: libpod-conmon-f4d1c73b5929ee49a275f79f0c4eee3ef4490df13fc2a0058a360cd0bb1f8689.scope: Deactivated successfully.
Jan 26 12:38:55 np0005596060 podman[74360]: 2026-01-26 17:38:55.393476702 +0000 UTC m=+0.041726678 container create 2e247c982decd0a6ec93923dbb9ef7b1f7cd2fe2ae9746e956c4f815c865bcd4 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 12:38:55 np0005596060 systemd[1]: Started libpod-conmon-2e247c982decd0a6ec93923dbb9ef7b1f7cd2fe2ae9746e956c4f815c865bcd4.scope.
Jan 26 12:38:55 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c814ef62955d332b76d1abd0325b5774943617583b05d0c3b8f9f4fead20bb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c814ef62955d332b76d1abd0325b5774943617583b05d0c3b8f9f4fead20bb1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c814ef62955d332b76d1abd0325b5774943617583b05d0c3b8f9f4fead20bb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:55 np0005596060 podman[74360]: 2026-01-26 17:38:55.373204539 +0000 UTC m=+0.021454555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:55 np0005596060 podman[74360]: 2026-01-26 17:38:55.477981152 +0000 UTC m=+0.126231148 container init 2e247c982decd0a6ec93923dbb9ef7b1f7cd2fe2ae9746e956c4f815c865bcd4 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:38:55 np0005596060 podman[74360]: 2026-01-26 17:38:55.484836785 +0000 UTC m=+0.133086791 container start 2e247c982decd0a6ec93923dbb9ef7b1f7cd2fe2ae9746e956c4f815c865bcd4 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 26 12:38:55 np0005596060 podman[74360]: 2026-01-26 17:38:55.488649862 +0000 UTC m=+0.136899888 container attach 2e247c982decd0a6ec93923dbb9ef7b1f7cd2fe2ae9746e956c4f815c865bcd4 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 12:38:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Jan 26 12:38:55 np0005596060 systemd[1]: libpod-2e247c982decd0a6ec93923dbb9ef7b1f7cd2fe2ae9746e956c4f815c865bcd4.scope: Deactivated successfully.
Jan 26 12:38:55 np0005596060 podman[74402]: 2026-01-26 17:38:55.974110904 +0000 UTC m=+0.033811547 container died 2e247c982decd0a6ec93923dbb9ef7b1f7cd2fe2ae9746e956c4f815c865bcd4 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:38:55 np0005596060 systemd[1]: var-lib-containers-storage-overlay-3c814ef62955d332b76d1abd0325b5774943617583b05d0c3b8f9f4fead20bb1-merged.mount: Deactivated successfully.
Jan 26 12:38:56 np0005596060 podman[74402]: 2026-01-26 17:38:56.012406943 +0000 UTC m=+0.072107577 container remove 2e247c982decd0a6ec93923dbb9ef7b1f7cd2fe2ae9746e956c4f815c865bcd4 (image=quay.io/ceph/ceph:v18, name=gallant_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:38:56 np0005596060 systemd[1]: libpod-conmon-2e247c982decd0a6ec93923dbb9ef7b1f7cd2fe2ae9746e956c4f815c865bcd4.scope: Deactivated successfully.
Jan 26 12:38:56 np0005596060 systemd[1]: Reloading.
Jan 26 12:38:56 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:38:56 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:38:56 np0005596060 systemd[1]: Reloading.
Jan 26 12:38:56 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:38:56 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:38:56 np0005596060 systemd[1]: Starting Ceph mgr.compute-0.mbryrf for d4cd1917-5876-51b6-bc64-65a16199754d...
Jan 26 12:38:56 np0005596060 podman[74543]: 2026-01-26 17:38:56.775712141 +0000 UTC m=+0.046982841 container create c9380c6bab6fe3d6503a27a8330588148f2a4409d1cf980f8c32bc8409e0485b (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 12:38:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab18adc336bbf025394f6142167b6920a78c6b8bf950551265df577933b11f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab18adc336bbf025394f6142167b6920a78c6b8bf950551265df577933b11f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab18adc336bbf025394f6142167b6920a78c6b8bf950551265df577933b11f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab18adc336bbf025394f6142167b6920a78c6b8bf950551265df577933b11f7/merged/var/lib/ceph/mgr/ceph-compute-0.mbryrf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:56 np0005596060 podman[74543]: 2026-01-26 17:38:56.834386946 +0000 UTC m=+0.105657716 container init c9380c6bab6fe3d6503a27a8330588148f2a4409d1cf980f8c32bc8409e0485b (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 12:38:56 np0005596060 podman[74543]: 2026-01-26 17:38:56.841830245 +0000 UTC m=+0.113100945 container start c9380c6bab6fe3d6503a27a8330588148f2a4409d1cf980f8c32bc8409e0485b (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:38:56 np0005596060 bash[74543]: c9380c6bab6fe3d6503a27a8330588148f2a4409d1cf980f8c32bc8409e0485b
Jan 26 12:38:56 np0005596060 podman[74543]: 2026-01-26 17:38:56.752869792 +0000 UTC m=+0.024140512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:56 np0005596060 systemd[1]: Started Ceph mgr.compute-0.mbryrf for d4cd1917-5876-51b6-bc64-65a16199754d.
Jan 26 12:38:56 np0005596060 ceph-mgr[74563]: set uid:gid to 167:167 (ceph:ceph)
Jan 26 12:38:56 np0005596060 ceph-mgr[74563]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 26 12:38:56 np0005596060 ceph-mgr[74563]: pidfile_write: ignore empty --pid-file
Jan 26 12:38:56 np0005596060 podman[74564]: 2026-01-26 17:38:56.920308992 +0000 UTC m=+0.044216761 container create 2bd3a5355f4860b943e393a13c2b05ea9d173bab896e2fca11371aefab5fabc6 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:38:56 np0005596060 systemd[1]: Started libpod-conmon-2bd3a5355f4860b943e393a13c2b05ea9d173bab896e2fca11371aefab5fabc6.scope.
Jan 26 12:38:56 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'alerts'
Jan 26 12:38:56 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce998f39783521e39d53a1a955808c2ef558c48cd76f3e98db1708e90fdc1b15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce998f39783521e39d53a1a955808c2ef558c48cd76f3e98db1708e90fdc1b15/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce998f39783521e39d53a1a955808c2ef558c48cd76f3e98db1708e90fdc1b15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:56 np0005596060 podman[74564]: 2026-01-26 17:38:56.900781947 +0000 UTC m=+0.024689746 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:57 np0005596060 podman[74564]: 2026-01-26 17:38:57.012460485 +0000 UTC m=+0.136368284 container init 2bd3a5355f4860b943e393a13c2b05ea9d173bab896e2fca11371aefab5fabc6 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:38:57 np0005596060 podman[74564]: 2026-01-26 17:38:57.01896745 +0000 UTC m=+0.142875219 container start 2bd3a5355f4860b943e393a13c2b05ea9d173bab896e2fca11371aefab5fabc6 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 12:38:57 np0005596060 podman[74564]: 2026-01-26 17:38:57.023135395 +0000 UTC m=+0.147043244 container attach 2bd3a5355f4860b943e393a13c2b05ea9d173bab896e2fca11371aefab5fabc6 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 12:38:57 np0005596060 ceph-mgr[74563]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 12:38:57 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'balancer'
Jan 26 12:38:57 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:38:57.274+0000 7f59fe007140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 12:38:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 26 12:38:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630992025' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]: 
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]: {
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "health": {
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "status": "HEALTH_OK",
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "checks": {},
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "mutes": []
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    },
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "election_epoch": 5,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "quorum": [
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        0
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    ],
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "quorum_names": [
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "compute-0"
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    ],
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "quorum_age": 2,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "monmap": {
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "epoch": 1,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "min_mon_release_name": "reef",
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "num_mons": 1
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    },
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "osdmap": {
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "epoch": 1,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "num_osds": 0,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "num_up_osds": 0,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "osd_up_since": 0,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "num_in_osds": 0,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "osd_in_since": 0,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "num_remapped_pgs": 0
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    },
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "pgmap": {
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "pgs_by_state": [],
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "num_pgs": 0,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "num_pools": 0,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "num_objects": 0,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "data_bytes": 0,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "bytes_used": 0,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "bytes_avail": 0,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "bytes_total": 0
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    },
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "fsmap": {
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "epoch": 1,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "by_rank": [],
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "up:standby": 0
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    },
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "mgrmap": {
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "available": false,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "num_standbys": 0,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "modules": [
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:            "iostat",
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:            "nfs",
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:            "restful"
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        ],
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "services": {}
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    },
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "servicemap": {
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "epoch": 1,
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "modified": "2026-01-26T17:38:51.755695+0000",
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:        "services": {}
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    },
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]:    "progress_events": {}
Jan 26 12:38:57 np0005596060 suspicious_turing[74604]: }
Jan 26 12:38:57 np0005596060 systemd[1]: libpod-2bd3a5355f4860b943e393a13c2b05ea9d173bab896e2fca11371aefab5fabc6.scope: Deactivated successfully.
Jan 26 12:38:57 np0005596060 podman[74564]: 2026-01-26 17:38:57.427964146 +0000 UTC m=+0.551871935 container died 2bd3a5355f4860b943e393a13c2b05ea9d173bab896e2fca11371aefab5fabc6 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 12:38:57 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ce998f39783521e39d53a1a955808c2ef558c48cd76f3e98db1708e90fdc1b15-merged.mount: Deactivated successfully.
Jan 26 12:38:57 np0005596060 podman[74564]: 2026-01-26 17:38:57.474003882 +0000 UTC m=+0.597911651 container remove 2bd3a5355f4860b943e393a13c2b05ea9d173bab896e2fca11371aefab5fabc6 (image=quay.io/ceph/ceph:v18, name=suspicious_turing, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 12:38:57 np0005596060 systemd[1]: libpod-conmon-2bd3a5355f4860b943e393a13c2b05ea9d173bab896e2fca11371aefab5fabc6.scope: Deactivated successfully.
Jan 26 12:38:57 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:38:57.525+0000 7f59fe007140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 12:38:57 np0005596060 ceph-mgr[74563]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 12:38:57 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'cephadm'
Jan 26 12:38:59 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'crash'
Jan 26 12:38:59 np0005596060 podman[74652]: 2026-01-26 17:38:59.555852564 +0000 UTC m=+0.048999482 container create c540e0175956b67cf2326861fc5086404e1fcf89b3f8e92bdd5c19ad3c67f252 (image=quay.io/ceph/ceph:v18, name=youthful_easley, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 12:38:59 np0005596060 systemd[1]: Started libpod-conmon-c540e0175956b67cf2326861fc5086404e1fcf89b3f8e92bdd5c19ad3c67f252.scope.
Jan 26 12:38:59 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:38:59 np0005596060 podman[74652]: 2026-01-26 17:38:59.535896099 +0000 UTC m=+0.029043037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:38:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27921e5cfdf2f1fe7771ffe04f629aaa269fd6072c662cb9594630d91953df/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27921e5cfdf2f1fe7771ffe04f629aaa269fd6072c662cb9594630d91953df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe27921e5cfdf2f1fe7771ffe04f629aaa269fd6072c662cb9594630d91953df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:38:59 np0005596060 podman[74652]: 2026-01-26 17:38:59.651718801 +0000 UTC m=+0.144865739 container init c540e0175956b67cf2326861fc5086404e1fcf89b3f8e92bdd5c19ad3c67f252 (image=quay.io/ceph/ceph:v18, name=youthful_easley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 12:38:59 np0005596060 podman[74652]: 2026-01-26 17:38:59.657369414 +0000 UTC m=+0.150516332 container start c540e0175956b67cf2326861fc5086404e1fcf89b3f8e92bdd5c19ad3c67f252 (image=quay.io/ceph/ceph:v18, name=youthful_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 12:38:59 np0005596060 podman[74652]: 2026-01-26 17:38:59.660399761 +0000 UTC m=+0.153546679 container attach c540e0175956b67cf2326861fc5086404e1fcf89b3f8e92bdd5c19ad3c67f252 (image=quay.io/ceph/ceph:v18, name=youthful_easley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 12:38:59 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:38:59.765+0000 7f59fe007140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 12:38:59 np0005596060 ceph-mgr[74563]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 12:38:59 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'dashboard'
Jan 26 12:39:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 26 12:39:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3279866592' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 12:39:00 np0005596060 youthful_easley[74668]: 
Jan 26 12:39:00 np0005596060 youthful_easley[74668]: {
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "health": {
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "status": "HEALTH_OK",
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "checks": {},
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "mutes": []
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    },
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "election_epoch": 5,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "quorum": [
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        0
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    ],
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "quorum_names": [
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "compute-0"
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    ],
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "quorum_age": 5,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "monmap": {
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "epoch": 1,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "min_mon_release_name": "reef",
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "num_mons": 1
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    },
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "osdmap": {
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "epoch": 1,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "num_osds": 0,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "num_up_osds": 0,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "osd_up_since": 0,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "num_in_osds": 0,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "osd_in_since": 0,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "num_remapped_pgs": 0
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    },
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "pgmap": {
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "pgs_by_state": [],
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "num_pgs": 0,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "num_pools": 0,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "num_objects": 0,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "data_bytes": 0,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "bytes_used": 0,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "bytes_avail": 0,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "bytes_total": 0
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    },
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "fsmap": {
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "epoch": 1,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "by_rank": [],
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "up:standby": 0
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    },
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "mgrmap": {
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "available": false,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "num_standbys": 0,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "modules": [
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:            "iostat",
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:            "nfs",
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:            "restful"
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        ],
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "services": {}
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    },
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "servicemap": {
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "epoch": 1,
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "modified": "2026-01-26T17:38:51.755695+0000",
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:        "services": {}
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    },
Jan 26 12:39:00 np0005596060 youthful_easley[74668]:    "progress_events": {}
Jan 26 12:39:00 np0005596060 youthful_easley[74668]: }
Jan 26 12:39:00 np0005596060 systemd[1]: libpod-c540e0175956b67cf2326861fc5086404e1fcf89b3f8e92bdd5c19ad3c67f252.scope: Deactivated successfully.
Jan 26 12:39:00 np0005596060 podman[74652]: 2026-01-26 17:39:00.075011939 +0000 UTC m=+0.568158857 container died c540e0175956b67cf2326861fc5086404e1fcf89b3f8e92bdd5c19ad3c67f252 (image=quay.io/ceph/ceph:v18, name=youthful_easley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:39:00 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fe27921e5cfdf2f1fe7771ffe04f629aaa269fd6072c662cb9594630d91953df-merged.mount: Deactivated successfully.
Jan 26 12:39:00 np0005596060 podman[74652]: 2026-01-26 17:39:00.205698558 +0000 UTC m=+0.698845496 container remove c540e0175956b67cf2326861fc5086404e1fcf89b3f8e92bdd5c19ad3c67f252 (image=quay.io/ceph/ceph:v18, name=youthful_easley, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 26 12:39:00 np0005596060 systemd[1]: libpod-conmon-c540e0175956b67cf2326861fc5086404e1fcf89b3f8e92bdd5c19ad3c67f252.scope: Deactivated successfully.
Jan 26 12:39:01 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'devicehealth'
Jan 26 12:39:01 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:01.581+0000 7f59fe007140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 12:39:01 np0005596060 ceph-mgr[74563]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 12:39:01 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'diskprediction_local'
Jan 26 12:39:02 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 26 12:39:02 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 26 12:39:02 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]:  from numpy import show_config as show_numpy_config
Jan 26 12:39:02 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:02.117+0000 7f59fe007140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 12:39:02 np0005596060 ceph-mgr[74563]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 12:39:02 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'influx'
Jan 26 12:39:02 np0005596060 podman[74706]: 2026-01-26 17:39:02.27446534 +0000 UTC m=+0.045915213 container create d4e1b3d2a13c4fe3cb53b61c2945f936c0c29a75998ccfa948e6a692279f2219 (image=quay.io/ceph/ceph:v18, name=dreamy_gauss, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:02 np0005596060 systemd[1]: Started libpod-conmon-d4e1b3d2a13c4fe3cb53b61c2945f936c0c29a75998ccfa948e6a692279f2219.scope.
Jan 26 12:39:02 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecba257a874327fc88fc0af090085a0aa57688eab32ebe220bbe28b117da8892/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecba257a874327fc88fc0af090085a0aa57688eab32ebe220bbe28b117da8892/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecba257a874327fc88fc0af090085a0aa57688eab32ebe220bbe28b117da8892/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:02 np0005596060 podman[74706]: 2026-01-26 17:39:02.252529265 +0000 UTC m=+0.023979168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:02 np0005596060 podman[74706]: 2026-01-26 17:39:02.351416909 +0000 UTC m=+0.122866782 container init d4e1b3d2a13c4fe3cb53b61c2945f936c0c29a75998ccfa948e6a692279f2219 (image=quay.io/ceph/ceph:v18, name=dreamy_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:39:02 np0005596060 podman[74706]: 2026-01-26 17:39:02.356100937 +0000 UTC m=+0.127550810 container start d4e1b3d2a13c4fe3cb53b61c2945f936c0c29a75998ccfa948e6a692279f2219 (image=quay.io/ceph/ceph:v18, name=dreamy_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:02 np0005596060 podman[74706]: 2026-01-26 17:39:02.35937257 +0000 UTC m=+0.130822473 container attach d4e1b3d2a13c4fe3cb53b61c2945f936c0c29a75998ccfa948e6a692279f2219 (image=quay.io/ceph/ceph:v18, name=dreamy_gauss, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 12:39:02 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:02.383+0000 7f59fe007140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 12:39:02 np0005596060 ceph-mgr[74563]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 12:39:02 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'insights'
Jan 26 12:39:02 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'iostat'
Jan 26 12:39:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 26 12:39:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1858411464' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]: 
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]: {
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "health": {
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "status": "HEALTH_OK",
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "checks": {},
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "mutes": []
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    },
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "election_epoch": 5,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "quorum": [
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        0
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    ],
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "quorum_names": [
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "compute-0"
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    ],
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "quorum_age": 8,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "monmap": {
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "epoch": 1,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "min_mon_release_name": "reef",
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "num_mons": 1
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    },
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "osdmap": {
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "epoch": 1,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "num_osds": 0,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "num_up_osds": 0,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "osd_up_since": 0,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "num_in_osds": 0,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "osd_in_since": 0,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "num_remapped_pgs": 0
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    },
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "pgmap": {
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "pgs_by_state": [],
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "num_pgs": 0,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "num_pools": 0,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "num_objects": 0,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "data_bytes": 0,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "bytes_used": 0,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "bytes_avail": 0,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "bytes_total": 0
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    },
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "fsmap": {
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "epoch": 1,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "by_rank": [],
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "up:standby": 0
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    },
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "mgrmap": {
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "available": false,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "num_standbys": 0,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "modules": [
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:            "iostat",
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:            "nfs",
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:            "restful"
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        ],
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "services": {}
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    },
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "servicemap": {
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "epoch": 1,
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "modified": "2026-01-26T17:38:51.755695+0000",
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:        "services": {}
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    },
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]:    "progress_events": {}
Jan 26 12:39:02 np0005596060 dreamy_gauss[74723]: }
Jan 26 12:39:02 np0005596060 systemd[1]: libpod-d4e1b3d2a13c4fe3cb53b61c2945f936c0c29a75998ccfa948e6a692279f2219.scope: Deactivated successfully.
Jan 26 12:39:02 np0005596060 podman[74749]: 2026-01-26 17:39:02.853932952 +0000 UTC m=+0.023025474 container died d4e1b3d2a13c4fe3cb53b61c2945f936c0c29a75998ccfa948e6a692279f2219 (image=quay.io/ceph/ceph:v18, name=dreamy_gauss, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:02 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ecba257a874327fc88fc0af090085a0aa57688eab32ebe220bbe28b117da8892-merged.mount: Deactivated successfully.
Jan 26 12:39:02 np0005596060 podman[74749]: 2026-01-26 17:39:02.893244137 +0000 UTC m=+0.062336659 container remove d4e1b3d2a13c4fe3cb53b61c2945f936c0c29a75998ccfa948e6a692279f2219 (image=quay.io/ceph/ceph:v18, name=dreamy_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 12:39:02 np0005596060 systemd[1]: libpod-conmon-d4e1b3d2a13c4fe3cb53b61c2945f936c0c29a75998ccfa948e6a692279f2219.scope: Deactivated successfully.
Jan 26 12:39:02 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:02.928+0000 7f59fe007140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 12:39:02 np0005596060 ceph-mgr[74563]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 12:39:02 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'k8sevents'
Jan 26 12:39:04 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'localpool'
Jan 26 12:39:04 np0005596060 podman[74764]: 2026-01-26 17:39:04.974321391 +0000 UTC m=+0.050719185 container create 347765e089107afa605d6f1f8e65c45e5e910ef05a93db761e2353239ab4f2c6 (image=quay.io/ceph/ceph:v18, name=pensive_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:39:05 np0005596060 systemd[1]: Started libpod-conmon-347765e089107afa605d6f1f8e65c45e5e910ef05a93db761e2353239ab4f2c6.scope.
Jan 26 12:39:05 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'mds_autoscaler'
Jan 26 12:39:05 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93e8d6dc803ce2f48a1159d5b761f6085a6cf271f7f4d9dd8293dcb7abc6a0c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93e8d6dc803ce2f48a1159d5b761f6085a6cf271f7f4d9dd8293dcb7abc6a0c0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93e8d6dc803ce2f48a1159d5b761f6085a6cf271f7f4d9dd8293dcb7abc6a0c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:05 np0005596060 podman[74764]: 2026-01-26 17:39:05.045732779 +0000 UTC m=+0.122130623 container init 347765e089107afa605d6f1f8e65c45e5e910ef05a93db761e2353239ab4f2c6 (image=quay.io/ceph/ceph:v18, name=pensive_jemison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:39:05 np0005596060 podman[74764]: 2026-01-26 17:39:04.951011261 +0000 UTC m=+0.027409105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:05 np0005596060 podman[74764]: 2026-01-26 17:39:05.060600615 +0000 UTC m=+0.136998439 container start 347765e089107afa605d6f1f8e65c45e5e910ef05a93db761e2353239ab4f2c6 (image=quay.io/ceph/ceph:v18, name=pensive_jemison, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:39:05 np0005596060 podman[74764]: 2026-01-26 17:39:05.065306495 +0000 UTC m=+0.141704319 container attach 347765e089107afa605d6f1f8e65c45e5e910ef05a93db761e2353239ab4f2c6 (image=quay.io/ceph/ceph:v18, name=pensive_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 12:39:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 26 12:39:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2110757903' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]: 
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]: {
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "health": {
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "status": "HEALTH_OK",
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "checks": {},
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "mutes": []
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    },
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "election_epoch": 5,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "quorum": [
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        0
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    ],
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "quorum_names": [
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "compute-0"
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    ],
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "quorum_age": 10,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "monmap": {
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "epoch": 1,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "min_mon_release_name": "reef",
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "num_mons": 1
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    },
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "osdmap": {
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "epoch": 1,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "num_osds": 0,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "num_up_osds": 0,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "osd_up_since": 0,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "num_in_osds": 0,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "osd_in_since": 0,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "num_remapped_pgs": 0
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    },
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "pgmap": {
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "pgs_by_state": [],
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "num_pgs": 0,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "num_pools": 0,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "num_objects": 0,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "data_bytes": 0,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "bytes_used": 0,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "bytes_avail": 0,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "bytes_total": 0
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    },
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "fsmap": {
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "epoch": 1,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "by_rank": [],
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "up:standby": 0
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    },
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "mgrmap": {
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "available": false,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "num_standbys": 0,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "modules": [
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:            "iostat",
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:            "nfs",
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:            "restful"
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        ],
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "services": {}
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    },
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "servicemap": {
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "epoch": 1,
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "modified": "2026-01-26T17:38:51.755695+0000",
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:        "services": {}
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    },
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]:    "progress_events": {}
Jan 26 12:39:05 np0005596060 pensive_jemison[74781]: }
Jan 26 12:39:05 np0005596060 systemd[1]: libpod-347765e089107afa605d6f1f8e65c45e5e910ef05a93db761e2353239ab4f2c6.scope: Deactivated successfully.
Jan 26 12:39:05 np0005596060 podman[74764]: 2026-01-26 17:39:05.478117327 +0000 UTC m=+0.554515131 container died 347765e089107afa605d6f1f8e65c45e5e910ef05a93db761e2353239ab4f2c6 (image=quay.io/ceph/ceph:v18, name=pensive_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 12:39:05 np0005596060 systemd[1]: var-lib-containers-storage-overlay-93e8d6dc803ce2f48a1159d5b761f6085a6cf271f7f4d9dd8293dcb7abc6a0c0-merged.mount: Deactivated successfully.
Jan 26 12:39:05 np0005596060 podman[74764]: 2026-01-26 17:39:05.514768255 +0000 UTC m=+0.591166049 container remove 347765e089107afa605d6f1f8e65c45e5e910ef05a93db761e2353239ab4f2c6 (image=quay.io/ceph/ceph:v18, name=pensive_jemison, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:39:05 np0005596060 systemd[1]: libpod-conmon-347765e089107afa605d6f1f8e65c45e5e910ef05a93db761e2353239ab4f2c6.scope: Deactivated successfully.
Jan 26 12:39:05 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'mirroring'
Jan 26 12:39:06 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'nfs'
Jan 26 12:39:06 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:06.776+0000 7f59fe007140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 12:39:06 np0005596060 ceph-mgr[74563]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 12:39:06 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'orchestrator'
Jan 26 12:39:07 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:07.452+0000 7f59fe007140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 12:39:07 np0005596060 ceph-mgr[74563]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 12:39:07 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'osd_perf_query'
Jan 26 12:39:07 np0005596060 podman[74821]: 2026-01-26 17:39:07.588516593 +0000 UTC m=+0.046114649 container create a9bdb27558d085d951cae294c11bb0c32d516f8835f43fdc0327c177d5906430 (image=quay.io/ceph/ceph:v18, name=charming_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:07 np0005596060 systemd[1]: Started libpod-conmon-a9bdb27558d085d951cae294c11bb0c32d516f8835f43fdc0327c177d5906430.scope.
Jan 26 12:39:07 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:07 np0005596060 podman[74821]: 2026-01-26 17:39:07.568566917 +0000 UTC m=+0.026165003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8363fbe8578e32be88cb9a3d321dbd6f95198fa8c738eb92d6a563bd81dbd8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8363fbe8578e32be88cb9a3d321dbd6f95198fa8c738eb92d6a563bd81dbd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f8363fbe8578e32be88cb9a3d321dbd6f95198fa8c738eb92d6a563bd81dbd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:07 np0005596060 podman[74821]: 2026-01-26 17:39:07.688457173 +0000 UTC m=+0.146055269 container init a9bdb27558d085d951cae294c11bb0c32d516f8835f43fdc0327c177d5906430 (image=quay.io/ceph/ceph:v18, name=charming_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 12:39:07 np0005596060 podman[74821]: 2026-01-26 17:39:07.69542852 +0000 UTC m=+0.153026576 container start a9bdb27558d085d951cae294c11bb0c32d516f8835f43fdc0327c177d5906430 (image=quay.io/ceph/ceph:v18, name=charming_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:07 np0005596060 podman[74821]: 2026-01-26 17:39:07.699356329 +0000 UTC m=+0.156954395 container attach a9bdb27558d085d951cae294c11bb0c32d516f8835f43fdc0327c177d5906430 (image=quay.io/ceph/ceph:v18, name=charming_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 12:39:07 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:07.722+0000 7f59fe007140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 12:39:07 np0005596060 ceph-mgr[74563]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 12:39:07 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'osd_support'
Jan 26 12:39:07 np0005596060 ceph-mgr[74563]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 12:39:07 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'pg_autoscaler'
Jan 26 12:39:07 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:07.956+0000 7f59fe007140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 12:39:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 26 12:39:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1816661451' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 12:39:08 np0005596060 charming_keller[74837]: 
Jan 26 12:39:08 np0005596060 charming_keller[74837]: {
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "health": {
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "status": "HEALTH_OK",
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "checks": {},
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "mutes": []
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    },
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "election_epoch": 5,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "quorum": [
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        0
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    ],
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "quorum_names": [
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "compute-0"
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    ],
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "quorum_age": 13,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "monmap": {
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "epoch": 1,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "min_mon_release_name": "reef",
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "num_mons": 1
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    },
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "osdmap": {
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "epoch": 1,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "num_osds": 0,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "num_up_osds": 0,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "osd_up_since": 0,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "num_in_osds": 0,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "osd_in_since": 0,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "num_remapped_pgs": 0
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    },
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "pgmap": {
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "pgs_by_state": [],
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "num_pgs": 0,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "num_pools": 0,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "num_objects": 0,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "data_bytes": 0,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "bytes_used": 0,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "bytes_avail": 0,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "bytes_total": 0
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    },
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "fsmap": {
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "epoch": 1,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "by_rank": [],
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "up:standby": 0
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    },
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "mgrmap": {
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "available": false,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "num_standbys": 0,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "modules": [
Jan 26 12:39:08 np0005596060 charming_keller[74837]:            "iostat",
Jan 26 12:39:08 np0005596060 charming_keller[74837]:            "nfs",
Jan 26 12:39:08 np0005596060 charming_keller[74837]:            "restful"
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        ],
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "services": {}
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    },
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "servicemap": {
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "epoch": 1,
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "modified": "2026-01-26T17:38:51.755695+0000",
Jan 26 12:39:08 np0005596060 charming_keller[74837]:        "services": {}
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    },
Jan 26 12:39:08 np0005596060 charming_keller[74837]:    "progress_events": {}
Jan 26 12:39:08 np0005596060 charming_keller[74837]: }
Jan 26 12:39:08 np0005596060 systemd[1]: libpod-a9bdb27558d085d951cae294c11bb0c32d516f8835f43fdc0327c177d5906430.scope: Deactivated successfully.
Jan 26 12:39:08 np0005596060 podman[74821]: 2026-01-26 17:39:08.193712526 +0000 UTC m=+0.651310582 container died a9bdb27558d085d951cae294c11bb0c32d516f8835f43fdc0327c177d5906430 (image=quay.io/ceph/ceph:v18, name=charming_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 12:39:08 np0005596060 systemd[1]: var-lib-containers-storage-overlay-6f8363fbe8578e32be88cb9a3d321dbd6f95198fa8c738eb92d6a563bd81dbd8-merged.mount: Deactivated successfully.
Jan 26 12:39:08 np0005596060 podman[74821]: 2026-01-26 17:39:08.232705714 +0000 UTC m=+0.690303770 container remove a9bdb27558d085d951cae294c11bb0c32d516f8835f43fdc0327c177d5906430 (image=quay.io/ceph/ceph:v18, name=charming_keller, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:39:08 np0005596060 ceph-mgr[74563]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 12:39:08 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:08.239+0000 7f59fe007140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 12:39:08 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'progress'
Jan 26 12:39:08 np0005596060 systemd[1]: libpod-conmon-a9bdb27558d085d951cae294c11bb0c32d516f8835f43fdc0327c177d5906430.scope: Deactivated successfully.
Jan 26 12:39:08 np0005596060 ceph-mgr[74563]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 12:39:08 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:08.482+0000 7f59fe007140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 12:39:08 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'prometheus'
Jan 26 12:39:09 np0005596060 ceph-mgr[74563]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 12:39:09 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:09.549+0000 7f59fe007140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 12:39:09 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'rbd_support'
Jan 26 12:39:09 np0005596060 ceph-mgr[74563]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 12:39:09 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:09.858+0000 7f59fe007140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 12:39:09 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'restful'
Jan 26 12:39:10 np0005596060 podman[74876]: 2026-01-26 17:39:10.328034197 +0000 UTC m=+0.060651787 container create 6c3e8fa728025f34a4c313f0b08db9a8fe8f0efd2a57f29bc17b23880041bdc3 (image=quay.io/ceph/ceph:v18, name=boring_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:39:10 np0005596060 systemd[1]: Started libpod-conmon-6c3e8fa728025f34a4c313f0b08db9a8fe8f0efd2a57f29bc17b23880041bdc3.scope.
Jan 26 12:39:10 np0005596060 podman[74876]: 2026-01-26 17:39:10.296983611 +0000 UTC m=+0.029601231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:10 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd11995c9c06208bbcd2a8515e4dde0d4fe62f10ef2ffda957517b7bc48a5935/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd11995c9c06208bbcd2a8515e4dde0d4fe62f10ef2ffda957517b7bc48a5935/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd11995c9c06208bbcd2a8515e4dde0d4fe62f10ef2ffda957517b7bc48a5935/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:10 np0005596060 podman[74876]: 2026-01-26 17:39:10.411966042 +0000 UTC m=+0.144583672 container init 6c3e8fa728025f34a4c313f0b08db9a8fe8f0efd2a57f29bc17b23880041bdc3 (image=quay.io/ceph/ceph:v18, name=boring_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 12:39:10 np0005596060 podman[74876]: 2026-01-26 17:39:10.41740377 +0000 UTC m=+0.150021370 container start 6c3e8fa728025f34a4c313f0b08db9a8fe8f0efd2a57f29bc17b23880041bdc3 (image=quay.io/ceph/ceph:v18, name=boring_goldstine, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 12:39:10 np0005596060 podman[74876]: 2026-01-26 17:39:10.421917524 +0000 UTC m=+0.154535164 container attach 6c3e8fa728025f34a4c313f0b08db9a8fe8f0efd2a57f29bc17b23880041bdc3 (image=quay.io/ceph/ceph:v18, name=boring_goldstine, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:10 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'rgw'
Jan 26 12:39:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 26 12:39:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2336015517' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]: 
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]: {
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "health": {
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "status": "HEALTH_OK",
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "checks": {},
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "mutes": []
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    },
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "election_epoch": 5,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "quorum": [
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        0
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    ],
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "quorum_names": [
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "compute-0"
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    ],
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "quorum_age": 16,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "monmap": {
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "epoch": 1,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "min_mon_release_name": "reef",
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "num_mons": 1
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    },
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "osdmap": {
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "epoch": 1,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "num_osds": 0,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "num_up_osds": 0,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "osd_up_since": 0,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "num_in_osds": 0,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "osd_in_since": 0,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "num_remapped_pgs": 0
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    },
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "pgmap": {
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "pgs_by_state": [],
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "num_pgs": 0,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "num_pools": 0,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "num_objects": 0,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "data_bytes": 0,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "bytes_used": 0,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "bytes_avail": 0,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "bytes_total": 0
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    },
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "fsmap": {
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "epoch": 1,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "by_rank": [],
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "up:standby": 0
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    },
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "mgrmap": {
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "available": false,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "num_standbys": 0,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "modules": [
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:            "iostat",
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:            "nfs",
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:            "restful"
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        ],
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "services": {}
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    },
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "servicemap": {
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "epoch": 1,
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "modified": "2026-01-26T17:38:51.755695+0000",
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:        "services": {}
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    },
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]:    "progress_events": {}
Jan 26 12:39:10 np0005596060 boring_goldstine[74892]: }
Jan 26 12:39:10 np0005596060 systemd[1]: libpod-6c3e8fa728025f34a4c313f0b08db9a8fe8f0efd2a57f29bc17b23880041bdc3.scope: Deactivated successfully.
Jan 26 12:39:10 np0005596060 podman[74876]: 2026-01-26 17:39:10.87671019 +0000 UTC m=+0.609327790 container died 6c3e8fa728025f34a4c313f0b08db9a8fe8f0efd2a57f29bc17b23880041bdc3 (image=quay.io/ceph/ceph:v18, name=boring_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 26 12:39:11 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fd11995c9c06208bbcd2a8515e4dde0d4fe62f10ef2ffda957517b7bc48a5935-merged.mount: Deactivated successfully.
Jan 26 12:39:11 np0005596060 podman[74876]: 2026-01-26 17:39:11.099410988 +0000 UTC m=+0.832028588 container remove 6c3e8fa728025f34a4c313f0b08db9a8fe8f0efd2a57f29bc17b23880041bdc3 (image=quay.io/ceph/ceph:v18, name=boring_goldstine, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:11 np0005596060 systemd[1]: libpod-conmon-6c3e8fa728025f34a4c313f0b08db9a8fe8f0efd2a57f29bc17b23880041bdc3.scope: Deactivated successfully.
Jan 26 12:39:11 np0005596060 ceph-mgr[74563]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 12:39:11 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:11.420+0000 7f59fe007140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 12:39:11 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'rook'
Jan 26 12:39:13 np0005596060 podman[74932]: 2026-01-26 17:39:13.189568932 +0000 UTC m=+0.067294944 container create b315fca2874ba8d67dd22882504f519e7b41002fe490088eeee991729da3f14d (image=quay.io/ceph/ceph:v18, name=happy_saha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:39:13 np0005596060 podman[74932]: 2026-01-26 17:39:13.146272385 +0000 UTC m=+0.023998497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:13 np0005596060 systemd[1]: Started libpod-conmon-b315fca2874ba8d67dd22882504f519e7b41002fe490088eeee991729da3f14d.scope.
Jan 26 12:39:13 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d97a715bd0191f01c6793f499a5debcaccffd07b63cf23fceddf189e0276ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d97a715bd0191f01c6793f499a5debcaccffd07b63cf23fceddf189e0276ba/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d97a715bd0191f01c6793f499a5debcaccffd07b63cf23fceddf189e0276ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:13 np0005596060 podman[74932]: 2026-01-26 17:39:13.323048301 +0000 UTC m=+0.200774414 container init b315fca2874ba8d67dd22882504f519e7b41002fe490088eeee991729da3f14d (image=quay.io/ceph/ceph:v18, name=happy_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 26 12:39:13 np0005596060 podman[74932]: 2026-01-26 17:39:13.329856424 +0000 UTC m=+0.207582456 container start b315fca2874ba8d67dd22882504f519e7b41002fe490088eeee991729da3f14d (image=quay.io/ceph/ceph:v18, name=happy_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 12:39:13 np0005596060 podman[74932]: 2026-01-26 17:39:13.367138248 +0000 UTC m=+0.244864320 container attach b315fca2874ba8d67dd22882504f519e7b41002fe490088eeee991729da3f14d (image=quay.io/ceph/ceph:v18, name=happy_saha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 12:39:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 26 12:39:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/578977245' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 12:39:13 np0005596060 happy_saha[74948]: 
Jan 26 12:39:13 np0005596060 happy_saha[74948]: {
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "health": {
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "status": "HEALTH_OK",
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "checks": {},
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "mutes": []
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    },
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "election_epoch": 5,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "quorum": [
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        0
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    ],
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "quorum_names": [
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "compute-0"
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    ],
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "quorum_age": 19,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "monmap": {
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "epoch": 1,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "min_mon_release_name": "reef",
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "num_mons": 1
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    },
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "osdmap": {
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "epoch": 1,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "num_osds": 0,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "num_up_osds": 0,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "osd_up_since": 0,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "num_in_osds": 0,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "osd_in_since": 0,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "num_remapped_pgs": 0
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    },
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "pgmap": {
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "pgs_by_state": [],
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "num_pgs": 0,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "num_pools": 0,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "num_objects": 0,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "data_bytes": 0,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "bytes_used": 0,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "bytes_avail": 0,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "bytes_total": 0
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    },
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "fsmap": {
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "epoch": 1,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "by_rank": [],
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "up:standby": 0
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    },
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "mgrmap": {
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "available": false,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "num_standbys": 0,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "modules": [
Jan 26 12:39:13 np0005596060 happy_saha[74948]:            "iostat",
Jan 26 12:39:13 np0005596060 happy_saha[74948]:            "nfs",
Jan 26 12:39:13 np0005596060 happy_saha[74948]:            "restful"
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        ],
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "services": {}
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    },
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "servicemap": {
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "epoch": 1,
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "modified": "2026-01-26T17:38:51.755695+0000",
Jan 26 12:39:13 np0005596060 happy_saha[74948]:        "services": {}
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    },
Jan 26 12:39:13 np0005596060 happy_saha[74948]:    "progress_events": {}
Jan 26 12:39:13 np0005596060 happy_saha[74948]: }
Jan 26 12:39:13 np0005596060 ceph-mgr[74563]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 12:39:13 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'selftest'
Jan 26 12:39:13 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:13.739+0000 7f59fe007140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 12:39:13 np0005596060 systemd[1]: libpod-b315fca2874ba8d67dd22882504f519e7b41002fe490088eeee991729da3f14d.scope: Deactivated successfully.
Jan 26 12:39:13 np0005596060 podman[74932]: 2026-01-26 17:39:13.743716533 +0000 UTC m=+0.621442545 container died b315fca2874ba8d67dd22882504f519e7b41002fe490088eeee991729da3f14d (image=quay.io/ceph/ceph:v18, name=happy_saha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 12:39:13 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b9d97a715bd0191f01c6793f499a5debcaccffd07b63cf23fceddf189e0276ba-merged.mount: Deactivated successfully.
Jan 26 12:39:13 np0005596060 podman[74932]: 2026-01-26 17:39:13.806027691 +0000 UTC m=+0.683753703 container remove b315fca2874ba8d67dd22882504f519e7b41002fe490088eeee991729da3f14d (image=quay.io/ceph/ceph:v18, name=happy_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:39:13 np0005596060 systemd[1]: libpod-conmon-b315fca2874ba8d67dd22882504f519e7b41002fe490088eeee991729da3f14d.scope: Deactivated successfully.
Jan 26 12:39:13 np0005596060 ceph-mgr[74563]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 12:39:13 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'snap_schedule'
Jan 26 12:39:13 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:13.989+0000 7f59fe007140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 12:39:14 np0005596060 ceph-mgr[74563]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 12:39:14 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'stats'
Jan 26 12:39:14 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:14.226+0000 7f59fe007140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 12:39:14 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'status'
Jan 26 12:39:14 np0005596060 ceph-mgr[74563]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 12:39:14 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'telegraf'
Jan 26 12:39:14 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:14.748+0000 7f59fe007140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 12:39:15 np0005596060 ceph-mgr[74563]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 12:39:15 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'telemetry'
Jan 26 12:39:15 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:15.012+0000 7f59fe007140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 12:39:15 np0005596060 ceph-mgr[74563]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 12:39:15 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'test_orchestrator'
Jan 26 12:39:15 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:15.653+0000 7f59fe007140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 12:39:15 np0005596060 podman[74985]: 2026-01-26 17:39:15.861615129 +0000 UTC m=+0.034319380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:16 np0005596060 ceph-mgr[74563]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 12:39:16 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'volumes'
Jan 26 12:39:16 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:16.383+0000 7f59fe007140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'zabbix'
Jan 26 12:39:17 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:17.183+0000 7f59fe007140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 12:39:17 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:17.423+0000 7f59fe007140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: ms_deliver_dispatch: unhandled message 0x557a3e766f20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mbryrf
Jan 26 12:39:17 np0005596060 podman[74985]: 2026-01-26 17:39:17.814453185 +0000 UTC m=+1.987157376 container create d9f090146bfd46259972aaa0fdee4ae2f5f38dc422a88a205d08938f57c5570d (image=quay.io/ceph/ceph:v18, name=sweet_wozniak, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.mbryrf(active, starting, since 0.391315s)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr handle_mgr_map Activating!
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr handle_mgr_map I am now activating
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e1 all = 1
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mbryrf", "id": "compute-0.mbryrf"} v 0) v1
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mbryrf", "id": "compute-0.mbryrf"}]: dispatch
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: balancer
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Manager daemon compute-0.mbryrf is now available
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [balancer INFO root] Starting
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: crash
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:39:17
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [balancer INFO root] No pools available
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: devicehealth
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [devicehealth INFO root] Starting
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: iostat
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: nfs
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: orchestrator
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: pg_autoscaler
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: progress
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [progress INFO root] Loading...
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [progress INFO root] No stored events to load
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [progress INFO root] Loaded [] historic events
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [progress INFO root] Loaded OSDMap, ready.
Jan 26 12:39:17 np0005596060 systemd[1]: Started libpod-conmon-d9f090146bfd46259972aaa0fdee4ae2f5f38dc422a88a205d08938f57c5570d.scope.
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: Activating manager daemon compute-0.mbryrf
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: Manager daemon compute-0.mbryrf is now available
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] recovery thread starting
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] starting setup
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: rbd_support
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: restful
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [restful INFO root] server_addr: :: server_port: 8003
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [restful WARNING root] server not running: no certificate configured
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: status
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: telemetry
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mbryrf/mirror_snapshot_schedule"} v 0) v1
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mbryrf/mirror_snapshot_schedule"}]: dispatch
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] PerfHandler: starting
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TaskHandler: starting
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mbryrf/trash_purge_schedule"} v 0) v1
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mbryrf/trash_purge_schedule"}]: dispatch
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:39:17 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] setup complete
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Jan 26 12:39:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/064a092e0c132c0a2159e25cbddc486c1494c73d607958aada34c900d4c4e828/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/064a092e0c132c0a2159e25cbddc486c1494c73d607958aada34c900d4c4e828/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/064a092e0c132c0a2159e25cbddc486c1494c73d607958aada34c900d4c4e828/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Jan 26 12:39:17 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: volumes
Jan 26 12:39:17 np0005596060 podman[74985]: 2026-01-26 17:39:17.901818467 +0000 UTC m=+2.074522658 container init d9f090146bfd46259972aaa0fdee4ae2f5f38dc422a88a205d08938f57c5570d (image=quay.io/ceph/ceph:v18, name=sweet_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 12:39:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:17 np0005596060 podman[74985]: 2026-01-26 17:39:17.907378858 +0000 UTC m=+2.080083089 container start d9f090146bfd46259972aaa0fdee4ae2f5f38dc422a88a205d08938f57c5570d (image=quay.io/ceph/ceph:v18, name=sweet_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:39:17 np0005596060 podman[74985]: 2026-01-26 17:39:17.911853291 +0000 UTC m=+2.084557472 container attach d9f090146bfd46259972aaa0fdee4ae2f5f38dc422a88a205d08938f57c5570d (image=quay.io/ceph/ceph:v18, name=sweet_wozniak, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 12:39:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 26 12:39:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/453575981' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]: 
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]: {
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "health": {
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "status": "HEALTH_OK",
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "checks": {},
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "mutes": []
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    },
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "election_epoch": 5,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "quorum": [
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        0
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    ],
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "quorum_names": [
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "compute-0"
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    ],
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "quorum_age": 23,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "monmap": {
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "epoch": 1,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "min_mon_release_name": "reef",
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "num_mons": 1
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    },
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "osdmap": {
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "epoch": 1,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "num_osds": 0,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "num_up_osds": 0,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "osd_up_since": 0,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "num_in_osds": 0,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "osd_in_since": 0,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "num_remapped_pgs": 0
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    },
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "pgmap": {
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "pgs_by_state": [],
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "num_pgs": 0,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "num_pools": 0,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "num_objects": 0,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "data_bytes": 0,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "bytes_used": 0,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "bytes_avail": 0,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "bytes_total": 0
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    },
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "fsmap": {
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "epoch": 1,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "by_rank": [],
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "up:standby": 0
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    },
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "mgrmap": {
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "available": false,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "num_standbys": 0,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "modules": [
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:            "iostat",
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:            "nfs",
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:            "restful"
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        ],
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "services": {}
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    },
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "servicemap": {
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "epoch": 1,
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "modified": "2026-01-26T17:38:51.755695+0000",
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:        "services": {}
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    },
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]:    "progress_events": {}
Jan 26 12:39:18 np0005596060 sweet_wozniak[75033]: }
Jan 26 12:39:18 np0005596060 systemd[1]: libpod-d9f090146bfd46259972aaa0fdee4ae2f5f38dc422a88a205d08938f57c5570d.scope: Deactivated successfully.
Jan 26 12:39:18 np0005596060 podman[74985]: 2026-01-26 17:39:18.343615924 +0000 UTC m=+2.516320155 container died d9f090146bfd46259972aaa0fdee4ae2f5f38dc422a88a205d08938f57c5570d (image=quay.io/ceph/ceph:v18, name=sweet_wozniak, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 12:39:18 np0005596060 systemd[1]: var-lib-containers-storage-overlay-064a092e0c132c0a2159e25cbddc486c1494c73d607958aada34c900d4c4e828-merged.mount: Deactivated successfully.
Jan 26 12:39:18 np0005596060 podman[74985]: 2026-01-26 17:39:18.395789415 +0000 UTC m=+2.568493606 container remove d9f090146bfd46259972aaa0fdee4ae2f5f38dc422a88a205d08938f57c5570d (image=quay.io/ceph/ceph:v18, name=sweet_wozniak, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:39:18 np0005596060 systemd[1]: libpod-conmon-d9f090146bfd46259972aaa0fdee4ae2f5f38dc422a88a205d08938f57c5570d.scope: Deactivated successfully.
Jan 26 12:39:18 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.mbryrf(active, since 1.40466s)
Jan 26 12:39:18 np0005596060 ceph-mon[74267]: from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mbryrf/mirror_snapshot_schedule"}]: dispatch
Jan 26 12:39:18 np0005596060 ceph-mon[74267]: from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mbryrf/trash_purge_schedule"}]: dispatch
Jan 26 12:39:18 np0005596060 ceph-mon[74267]: from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:18 np0005596060 ceph-mon[74267]: from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:18 np0005596060 ceph-mon[74267]: from='mgr.14102 192.168.122.100:0/3878569388' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:19 np0005596060 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 12:39:19 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.mbryrf(active, since 2s)
Jan 26 12:39:20 np0005596060 podman[75117]: 2026-01-26 17:39:20.475026421 +0000 UTC m=+0.054090671 container create 35deb036e786579447649ffe743ffe15531b98d394254e2cc4c33755a1e9d33a (image=quay.io/ceph/ceph:v18, name=keen_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 12:39:20 np0005596060 systemd[1]: Started libpod-conmon-35deb036e786579447649ffe743ffe15531b98d394254e2cc4c33755a1e9d33a.scope.
Jan 26 12:39:20 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:20 np0005596060 podman[75117]: 2026-01-26 17:39:20.445900733 +0000 UTC m=+0.024965003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be59901f46f4b9eed594bf467a95fc9998d5a04cd428623bb7acbb3c5212faa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be59901f46f4b9eed594bf467a95fc9998d5a04cd428623bb7acbb3c5212faa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be59901f46f4b9eed594bf467a95fc9998d5a04cd428623bb7acbb3c5212faa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:20 np0005596060 podman[75117]: 2026-01-26 17:39:20.628795048 +0000 UTC m=+0.207859328 container init 35deb036e786579447649ffe743ffe15531b98d394254e2cc4c33755a1e9d33a (image=quay.io/ceph/ceph:v18, name=keen_bhaskara, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 12:39:20 np0005596060 podman[75117]: 2026-01-26 17:39:20.635429002 +0000 UTC m=+0.214493252 container start 35deb036e786579447649ffe743ffe15531b98d394254e2cc4c33755a1e9d33a (image=quay.io/ceph/ceph:v18, name=keen_bhaskara, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 12:39:20 np0005596060 podman[75117]: 2026-01-26 17:39:20.63996009 +0000 UTC m=+0.219024370 container attach 35deb036e786579447649ffe743ffe15531b98d394254e2cc4c33755a1e9d33a (image=quay.io/ceph/ceph:v18, name=keen_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:39:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 26 12:39:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/332483266' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]: 
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]: {
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "health": {
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "status": "HEALTH_OK",
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "checks": {},
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "mutes": []
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    },
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "election_epoch": 5,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "quorum": [
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        0
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    ],
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "quorum_names": [
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "compute-0"
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    ],
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "quorum_age": 26,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "monmap": {
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "epoch": 1,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "min_mon_release_name": "reef",
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "num_mons": 1
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    },
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "osdmap": {
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "epoch": 1,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "num_osds": 0,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "num_up_osds": 0,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "osd_up_since": 0,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "num_in_osds": 0,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "osd_in_since": 0,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "num_remapped_pgs": 0
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    },
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "pgmap": {
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "pgs_by_state": [],
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "num_pgs": 0,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "num_pools": 0,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "num_objects": 0,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "data_bytes": 0,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "bytes_used": 0,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "bytes_avail": 0,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "bytes_total": 0
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    },
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "fsmap": {
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "epoch": 1,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "by_rank": [],
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "up:standby": 0
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    },
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "mgrmap": {
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "available": true,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "num_standbys": 0,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "modules": [
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:            "iostat",
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:            "nfs",
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:            "restful"
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        ],
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "services": {}
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    },
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "servicemap": {
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "epoch": 1,
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "modified": "2026-01-26T17:38:51.755695+0000",
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:        "services": {}
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    },
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]:    "progress_events": {}
Jan 26 12:39:21 np0005596060 keen_bhaskara[75133]: }
Jan 26 12:39:21 np0005596060 systemd[1]: libpod-35deb036e786579447649ffe743ffe15531b98d394254e2cc4c33755a1e9d33a.scope: Deactivated successfully.
Jan 26 12:39:21 np0005596060 podman[75159]: 2026-01-26 17:39:21.327373851 +0000 UTC m=+0.026141445 container died 35deb036e786579447649ffe743ffe15531b98d394254e2cc4c33755a1e9d33a (image=quay.io/ceph/ceph:v18, name=keen_bhaskara, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:21 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8be59901f46f4b9eed594bf467a95fc9998d5a04cd428623bb7acbb3c5212faa-merged.mount: Deactivated successfully.
Jan 26 12:39:21 np0005596060 podman[75159]: 2026-01-26 17:39:21.409093316 +0000 UTC m=+0.107860890 container remove 35deb036e786579447649ffe743ffe15531b98d394254e2cc4c33755a1e9d33a (image=quay.io/ceph/ceph:v18, name=keen_bhaskara, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:21 np0005596060 systemd[1]: libpod-conmon-35deb036e786579447649ffe743ffe15531b98d394254e2cc4c33755a1e9d33a.scope: Deactivated successfully.
Jan 26 12:39:21 np0005596060 podman[75174]: 2026-01-26 17:39:21.487990567 +0000 UTC m=+0.049699925 container create c6964bf92199ab92a81f99d1551d7a55f2d3d5f0baa069d8a09049268e98d745 (image=quay.io/ceph/ceph:v18, name=beautiful_elion, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 12:39:21 np0005596060 systemd[1]: Started libpod-conmon-c6964bf92199ab92a81f99d1551d7a55f2d3d5f0baa069d8a09049268e98d745.scope.
Jan 26 12:39:21 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8969c1890f09c6331a7abe057eef77fae647f5772765403b6d85b3545f2422c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8969c1890f09c6331a7abe057eef77fae647f5772765403b6d85b3545f2422c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8969c1890f09c6331a7abe057eef77fae647f5772765403b6d85b3545f2422c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8969c1890f09c6331a7abe057eef77fae647f5772765403b6d85b3545f2422c/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:21 np0005596060 podman[75174]: 2026-01-26 17:39:21.558658984 +0000 UTC m=+0.120368332 container init c6964bf92199ab92a81f99d1551d7a55f2d3d5f0baa069d8a09049268e98d745 (image=quay.io/ceph/ceph:v18, name=beautiful_elion, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 12:39:21 np0005596060 podman[75174]: 2026-01-26 17:39:21.463395709 +0000 UTC m=+0.025105057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:21 np0005596060 podman[75174]: 2026-01-26 17:39:21.564316493 +0000 UTC m=+0.126025811 container start c6964bf92199ab92a81f99d1551d7a55f2d3d5f0baa069d8a09049268e98d745 (image=quay.io/ceph/ceph:v18, name=beautiful_elion, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 12:39:21 np0005596060 podman[75174]: 2026-01-26 17:39:21.56804662 +0000 UTC m=+0.129755958 container attach c6964bf92199ab92a81f99d1551d7a55f2d3d5f0baa069d8a09049268e98d745 (image=quay.io/ceph/ceph:v18, name=beautiful_elion, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 12:39:21 np0005596060 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 12:39:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 26 12:39:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2579009671' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 26 12:39:22 np0005596060 systemd[1]: libpod-c6964bf92199ab92a81f99d1551d7a55f2d3d5f0baa069d8a09049268e98d745.scope: Deactivated successfully.
Jan 26 12:39:22 np0005596060 podman[75216]: 2026-01-26 17:39:22.166521581 +0000 UTC m=+0.028810114 container died c6964bf92199ab92a81f99d1551d7a55f2d3d5f0baa069d8a09049268e98d745 (image=quay.io/ceph/ceph:v18, name=beautiful_elion, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 26 12:39:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-e8969c1890f09c6331a7abe057eef77fae647f5772765403b6d85b3545f2422c-merged.mount: Deactivated successfully.
Jan 26 12:39:22 np0005596060 podman[75216]: 2026-01-26 17:39:22.215621842 +0000 UTC m=+0.077910295 container remove c6964bf92199ab92a81f99d1551d7a55f2d3d5f0baa069d8a09049268e98d745 (image=quay.io/ceph/ceph:v18, name=beautiful_elion, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Jan 26 12:39:22 np0005596060 systemd[1]: libpod-conmon-c6964bf92199ab92a81f99d1551d7a55f2d3d5f0baa069d8a09049268e98d745.scope: Deactivated successfully.
Jan 26 12:39:22 np0005596060 podman[75231]: 2026-01-26 17:39:22.287776304 +0000 UTC m=+0.042986417 container create 2b64d89085afacdf7b920dca8155bb60b2fc61cab6d72a02d523790ec07d4005 (image=quay.io/ceph/ceph:v18, name=naughty_spence, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:39:22 np0005596060 systemd[1]: Started libpod-conmon-2b64d89085afacdf7b920dca8155bb60b2fc61cab6d72a02d523790ec07d4005.scope.
Jan 26 12:39:22 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d4bc992857a71773c3e7939dce62b28d0a34d688ab9010267752838b7786057/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d4bc992857a71773c3e7939dce62b28d0a34d688ab9010267752838b7786057/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d4bc992857a71773c3e7939dce62b28d0a34d688ab9010267752838b7786057/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:22 np0005596060 podman[75231]: 2026-01-26 17:39:22.353092274 +0000 UTC m=+0.108302407 container init 2b64d89085afacdf7b920dca8155bb60b2fc61cab6d72a02d523790ec07d4005 (image=quay.io/ceph/ceph:v18, name=naughty_spence, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:39:22 np0005596060 podman[75231]: 2026-01-26 17:39:22.268782943 +0000 UTC m=+0.023993086 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:22 np0005596060 podman[75231]: 2026-01-26 17:39:22.365939788 +0000 UTC m=+0.121149901 container start 2b64d89085afacdf7b920dca8155bb60b2fc61cab6d72a02d523790ec07d4005 (image=quay.io/ceph/ceph:v18, name=naughty_spence, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 26 12:39:22 np0005596060 podman[75231]: 2026-01-26 17:39:22.36951361 +0000 UTC m=+0.124723753 container attach 2b64d89085afacdf7b920dca8155bb60b2fc61cab6d72a02d523790ec07d4005 (image=quay.io/ceph/ceph:v18, name=naughty_spence, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:39:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Jan 26 12:39:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3788827018' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 26 12:39:23 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/2579009671' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 26 12:39:23 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3788827018' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 26 12:39:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3788827018' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  1: '-n'
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  2: 'mgr.compute-0.mbryrf'
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  3: '-f'
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  4: '--setuser'
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  5: 'ceph'
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  6: '--setgroup'
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  7: 'ceph'
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  8: '--default-log-to-file=false'
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  9: '--default-log-to-journald=true'
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr respawn  exe_path /proc/self/exe
Jan 26 12:39:23 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.mbryrf(active, since 5s)
Jan 26 12:39:23 np0005596060 systemd[1]: libpod-2b64d89085afacdf7b920dca8155bb60b2fc61cab6d72a02d523790ec07d4005.scope: Deactivated successfully.
Jan 26 12:39:23 np0005596060 podman[75231]: 2026-01-26 17:39:23.370713278 +0000 UTC m=+1.125923391 container died 2b64d89085afacdf7b920dca8155bb60b2fc61cab6d72a02d523790ec07d4005 (image=quay.io/ceph/ceph:v18, name=naughty_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 26 12:39:23 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8d4bc992857a71773c3e7939dce62b28d0a34d688ab9010267752838b7786057-merged.mount: Deactivated successfully.
Jan 26 12:39:23 np0005596060 podman[75231]: 2026-01-26 17:39:23.446475033 +0000 UTC m=+1.201685146 container remove 2b64d89085afacdf7b920dca8155bb60b2fc61cab6d72a02d523790ec07d4005 (image=quay.io/ceph/ceph:v18, name=naughty_spence, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:23 np0005596060 systemd[1]: libpod-conmon-2b64d89085afacdf7b920dca8155bb60b2fc61cab6d72a02d523790ec07d4005.scope: Deactivated successfully.
Jan 26 12:39:23 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: ignoring --setuser ceph since I am not root
Jan 26 12:39:23 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: ignoring --setgroup ceph since I am not root
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: pidfile_write: ignore empty --pid-file
Jan 26 12:39:23 np0005596060 podman[75286]: 2026-01-26 17:39:23.518116176 +0000 UTC m=+0.048065504 container create 09be74e80998fced25e730b2d7a7b24abdd56dcd1c1f4a9a72f29d250ba4d84c (image=quay.io/ceph/ceph:v18, name=optimistic_ganguly, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 26 12:39:23 np0005596060 systemd[1]: Started libpod-conmon-09be74e80998fced25e730b2d7a7b24abdd56dcd1c1f4a9a72f29d250ba4d84c.scope.
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'alerts'
Jan 26 12:39:23 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8093bcd2b0c71585a3112664d5a3d0a184028e975fa9a0fc3a862fcc7dc77a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8093bcd2b0c71585a3112664d5a3d0a184028e975fa9a0fc3a862fcc7dc77a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8093bcd2b0c71585a3112664d5a3d0a184028e975fa9a0fc3a862fcc7dc77a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:23 np0005596060 podman[75286]: 2026-01-26 17:39:23.497699073 +0000 UTC m=+0.027648411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:23 np0005596060 podman[75286]: 2026-01-26 17:39:23.596572421 +0000 UTC m=+0.126521779 container init 09be74e80998fced25e730b2d7a7b24abdd56dcd1c1f4a9a72f29d250ba4d84c (image=quay.io/ceph/ceph:v18, name=optimistic_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 12:39:23 np0005596060 podman[75286]: 2026-01-26 17:39:23.603249037 +0000 UTC m=+0.133198355 container start 09be74e80998fced25e730b2d7a7b24abdd56dcd1c1f4a9a72f29d250ba4d84c (image=quay.io/ceph/ceph:v18, name=optimistic_ganguly, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:39:23 np0005596060 podman[75286]: 2026-01-26 17:39:23.60710841 +0000 UTC m=+0.137057748 container attach 09be74e80998fced25e730b2d7a7b24abdd56dcd1c1f4a9a72f29d250ba4d84c (image=quay.io/ceph/ceph:v18, name=optimistic_ganguly, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 12:39:23 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:23.851+0000 7f4f419df140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 26 12:39:23 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'balancer'
Jan 26 12:39:24 np0005596060 ceph-mgr[74563]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 12:39:24 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:24.094+0000 7f4f419df140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 26 12:39:24 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'cephadm'
Jan 26 12:39:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 26 12:39:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/660576787' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 26 12:39:24 np0005596060 optimistic_ganguly[75326]: {
Jan 26 12:39:24 np0005596060 optimistic_ganguly[75326]:    "epoch": 5,
Jan 26 12:39:24 np0005596060 optimistic_ganguly[75326]:    "available": true,
Jan 26 12:39:24 np0005596060 optimistic_ganguly[75326]:    "active_name": "compute-0.mbryrf",
Jan 26 12:39:24 np0005596060 optimistic_ganguly[75326]:    "num_standby": 0
Jan 26 12:39:24 np0005596060 optimistic_ganguly[75326]: }
Jan 26 12:39:24 np0005596060 systemd[1]: libpod-09be74e80998fced25e730b2d7a7b24abdd56dcd1c1f4a9a72f29d250ba4d84c.scope: Deactivated successfully.
Jan 26 12:39:24 np0005596060 podman[75286]: 2026-01-26 17:39:24.26552498 +0000 UTC m=+0.795474308 container died 09be74e80998fced25e730b2d7a7b24abdd56dcd1c1f4a9a72f29d250ba4d84c (image=quay.io/ceph/ceph:v18, name=optimistic_ganguly, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Jan 26 12:39:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2a8093bcd2b0c71585a3112664d5a3d0a184028e975fa9a0fc3a862fcc7dc77a-merged.mount: Deactivated successfully.
Jan 26 12:39:24 np0005596060 podman[75286]: 2026-01-26 17:39:24.308908461 +0000 UTC m=+0.838857779 container remove 09be74e80998fced25e730b2d7a7b24abdd56dcd1c1f4a9a72f29d250ba4d84c (image=quay.io/ceph/ceph:v18, name=optimistic_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 12:39:24 np0005596060 systemd[1]: libpod-conmon-09be74e80998fced25e730b2d7a7b24abdd56dcd1c1f4a9a72f29d250ba4d84c.scope: Deactivated successfully.
Jan 26 12:39:24 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3788827018' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 26 12:39:24 np0005596060 podman[75366]: 2026-01-26 17:39:24.374853144 +0000 UTC m=+0.045851873 container create 8b194a9e3ea47cdc74406d95f97e77636982e4d5c24e50a71bf2a6201653c8ef (image=quay.io/ceph/ceph:v18, name=competent_nash, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 12:39:24 np0005596060 systemd[1]: Started libpod-conmon-8b194a9e3ea47cdc74406d95f97e77636982e4d5c24e50a71bf2a6201653c8ef.scope.
Jan 26 12:39:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca633aa47b20e22a3b44b45a5f5846777225741eddaf0da67a40df53e8d18a1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca633aa47b20e22a3b44b45a5f5846777225741eddaf0da67a40df53e8d18a1a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca633aa47b20e22a3b44b45a5f5846777225741eddaf0da67a40df53e8d18a1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:24 np0005596060 podman[75366]: 2026-01-26 17:39:24.351308015 +0000 UTC m=+0.022306744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:24 np0005596060 podman[75366]: 2026-01-26 17:39:24.456843819 +0000 UTC m=+0.127842558 container init 8b194a9e3ea47cdc74406d95f97e77636982e4d5c24e50a71bf2a6201653c8ef (image=quay.io/ceph/ceph:v18, name=competent_nash, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:39:24 np0005596060 podman[75366]: 2026-01-26 17:39:24.462731016 +0000 UTC m=+0.133729725 container start 8b194a9e3ea47cdc74406d95f97e77636982e4d5c24e50a71bf2a6201653c8ef (image=quay.io/ceph/ceph:v18, name=competent_nash, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 12:39:24 np0005596060 podman[75366]: 2026-01-26 17:39:24.466138642 +0000 UTC m=+0.137137391 container attach 8b194a9e3ea47cdc74406d95f97e77636982e4d5c24e50a71bf2a6201653c8ef (image=quay.io/ceph/ceph:v18, name=competent_nash, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:39:26 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'crash'
Jan 26 12:39:26 np0005596060 ceph-mgr[74563]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 12:39:26 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:26.387+0000 7f4f419df140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 26 12:39:26 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'dashboard'
Jan 26 12:39:27 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'devicehealth'
Jan 26 12:39:28 np0005596060 ceph-mgr[74563]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 12:39:28 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:28.086+0000 7f4f419df140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 26 12:39:28 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'diskprediction_local'
Jan 26 12:39:28 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 26 12:39:28 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 26 12:39:28 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]:  from numpy import show_config as show_numpy_config
Jan 26 12:39:28 np0005596060 ceph-mgr[74563]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 12:39:28 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:28.656+0000 7f4f419df140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 26 12:39:28 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'influx'
Jan 26 12:39:28 np0005596060 ceph-mgr[74563]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 12:39:28 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:28.922+0000 7f4f419df140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 26 12:39:28 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'insights'
Jan 26 12:39:29 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'iostat'
Jan 26 12:39:29 np0005596060 ceph-mgr[74563]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 12:39:29 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:29.411+0000 7f4f419df140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 26 12:39:29 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'k8sevents'
Jan 26 12:39:31 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'localpool'
Jan 26 12:39:31 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'mds_autoscaler'
Jan 26 12:39:32 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'mirroring'
Jan 26 12:39:32 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'nfs'
Jan 26 12:39:33 np0005596060 ceph-mgr[74563]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 12:39:33 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:33.090+0000 7f4f419df140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 26 12:39:33 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'orchestrator'
Jan 26 12:39:33 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:33.746+0000 7f4f419df140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 12:39:33 np0005596060 ceph-mgr[74563]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 26 12:39:33 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'osd_perf_query'
Jan 26 12:39:34 np0005596060 ceph-mgr[74563]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 12:39:34 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:34.004+0000 7f4f419df140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 26 12:39:34 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'osd_support'
Jan 26 12:39:34 np0005596060 ceph-mgr[74563]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 12:39:34 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'pg_autoscaler'
Jan 26 12:39:34 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:34.331+0000 7f4f419df140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 26 12:39:34 np0005596060 ceph-mgr[74563]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 12:39:34 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:34.629+0000 7f4f419df140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 26 12:39:34 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'progress'
Jan 26 12:39:34 np0005596060 ceph-mgr[74563]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 12:39:34 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:34.878+0000 7f4f419df140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 26 12:39:34 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'prometheus'
Jan 26 12:39:35 np0005596060 ceph-mgr[74563]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 12:39:35 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:35.890+0000 7f4f419df140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 26 12:39:35 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'rbd_support'
Jan 26 12:39:36 np0005596060 ceph-mgr[74563]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 12:39:36 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:36.196+0000 7f4f419df140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 26 12:39:36 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'restful'
Jan 26 12:39:36 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'rgw'
Jan 26 12:39:37 np0005596060 ceph-mgr[74563]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 12:39:37 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:37.646+0000 7f4f419df140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 26 12:39:37 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'rook'
Jan 26 12:39:40 np0005596060 ceph-mgr[74563]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 12:39:40 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:40.178+0000 7f4f419df140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 26 12:39:40 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'selftest'
Jan 26 12:39:40 np0005596060 ceph-mgr[74563]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 12:39:40 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:40.452+0000 7f4f419df140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 26 12:39:40 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'snap_schedule'
Jan 26 12:39:40 np0005596060 ceph-mgr[74563]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 12:39:40 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:40.713+0000 7f4f419df140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 26 12:39:40 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'stats'
Jan 26 12:39:40 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'status'
Jan 26 12:39:41 np0005596060 ceph-mgr[74563]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 12:39:41 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:41.256+0000 7f4f419df140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 26 12:39:41 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'telegraf'
Jan 26 12:39:41 np0005596060 ceph-mgr[74563]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 12:39:41 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:41.516+0000 7f4f419df140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 26 12:39:41 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'telemetry'
Jan 26 12:39:42 np0005596060 ceph-mgr[74563]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 12:39:42 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:42.194+0000 7f4f419df140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 26 12:39:42 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'test_orchestrator'
Jan 26 12:39:42 np0005596060 ceph-mgr[74563]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 12:39:42 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:42.926+0000 7f4f419df140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 26 12:39:42 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'volumes'
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 12:39:43 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:43.658+0000 7f4f419df140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: mgr[py] Loading python module 'zabbix'
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 12:39:43 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:39:43.902+0000 7f4f419df140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Active manager daemon compute-0.mbryrf restarted
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: ms_deliver_dispatch: unhandled message 0x5642ea2a2420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.mbryrf
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: mgr handle_mgr_map Activating!
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: mgr handle_mgr_map I am now activating
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.mbryrf(active, starting, since 0.0370488s)
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.mbryrf", "id": "compute-0.mbryrf"} v 0) v1
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mgr metadata", "who": "compute-0.mbryrf", "id": "compute-0.mbryrf"}]: dispatch
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e1 all = 1
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: balancer
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Starting
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Manager daemon compute-0.mbryrf is now available
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:39:43
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] No pools available
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: Active manager daemon compute-0.mbryrf restarted
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: Activating manager daemon compute-0.mbryrf
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: Manager daemon compute-0.mbryrf is now available
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 26 12:39:43 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 26 12:39:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: cephadm
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: crash
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: devicehealth
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: iostat
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [devicehealth INFO root] Starting
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: nfs
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: orchestrator
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: pg_autoscaler
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: progress
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [progress INFO root] Loading...
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [progress INFO root] No stored events to load
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [progress INFO root] Loaded [] historic events
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [progress INFO root] Loaded OSDMap, ready.
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] recovery thread starting
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] starting setup
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: rbd_support
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: restful
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [restful INFO root] server_addr: :: server_port: 8003
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: status
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: telemetry
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mbryrf/mirror_snapshot_schedule"} v 0) v1
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mbryrf/mirror_snapshot_schedule"}]: dispatch
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [restful WARNING root] server not running: no certificate configured
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] PerfHandler: starting
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TaskHandler: starting
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mbryrf/trash_purge_schedule"} v 0) v1
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mbryrf/trash_purge_schedule"}]: dispatch
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] setup complete
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: mgr load Constructed class from module: volumes
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019931015 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 26 12:39:44 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.mbryrf(active, since 1.0459s)
Jan 26 12:39:44 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 26 12:39:44 np0005596060 competent_nash[75382]: {
Jan 26 12:39:44 np0005596060 competent_nash[75382]:    "mgrmap_epoch": 7,
Jan 26 12:39:44 np0005596060 competent_nash[75382]:    "initialized": true
Jan 26 12:39:44 np0005596060 competent_nash[75382]: }
Jan 26 12:39:44 np0005596060 systemd[1]: libpod-8b194a9e3ea47cdc74406d95f97e77636982e4d5c24e50a71bf2a6201653c8ef.scope: Deactivated successfully.
Jan 26 12:39:44 np0005596060 podman[75366]: 2026-01-26 17:39:44.985733387 +0000 UTC m=+20.656732116 container died 8b194a9e3ea47cdc74406d95f97e77636982e4d5c24e50a71bf2a6201653c8ef (image=quay.io/ceph/ceph:v18, name=competent_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: Found migration_current of "None". Setting to last migration.
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mbryrf/mirror_snapshot_schedule"}]: dispatch
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.mbryrf/trash_purge_schedule"}]: dispatch
Jan 26 12:39:45 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ca633aa47b20e22a3b44b45a5f5846777225741eddaf0da67a40df53e8d18a1a-merged.mount: Deactivated successfully.
Jan 26 12:39:45 np0005596060 podman[75366]: 2026-01-26 17:39:45.042146308 +0000 UTC m=+20.713145007 container remove 8b194a9e3ea47cdc74406d95f97e77636982e4d5c24e50a71bf2a6201653c8ef (image=quay.io/ceph/ceph:v18, name=competent_nash, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:39:45 np0005596060 systemd[1]: libpod-conmon-8b194a9e3ea47cdc74406d95f97e77636982e4d5c24e50a71bf2a6201653c8ef.scope: Deactivated successfully.
Jan 26 12:39:45 np0005596060 podman[75541]: 2026-01-26 17:39:45.191354763 +0000 UTC m=+0.114266567 container create d059d8ba7b916af9c6d794a08d4a66031d6966d0712f6ff53fc91e98e02b2b7f (image=quay.io/ceph/ceph:v18, name=laughing_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:39:45 np0005596060 podman[75541]: 2026-01-26 17:39:45.103500531 +0000 UTC m=+0.026412355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:45 np0005596060 systemd[1]: Started libpod-conmon-d059d8ba7b916af9c6d794a08d4a66031d6966d0712f6ff53fc91e98e02b2b7f.scope.
Jan 26 12:39:45 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1bf96c12d070309911caf69a5b379db161c86924c462c05d53fee5e652a4ad/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1bf96c12d070309911caf69a5b379db161c86924c462c05d53fee5e652a4ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab1bf96c12d070309911caf69a5b379db161c86924c462c05d53fee5e652a4ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:45 np0005596060 podman[75541]: 2026-01-26 17:39:45.29264146 +0000 UTC m=+0.215553264 container init d059d8ba7b916af9c6d794a08d4a66031d6966d0712f6ff53fc91e98e02b2b7f (image=quay.io/ceph/ceph:v18, name=laughing_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 26 12:39:45 np0005596060 podman[75541]: 2026-01-26 17:39:45.300056423 +0000 UTC m=+0.222968227 container start d059d8ba7b916af9c6d794a08d4a66031d6966d0712f6ff53fc91e98e02b2b7f (image=quay.io/ceph/ceph:v18, name=laughing_kapitsa, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:39:45 np0005596060 podman[75541]: 2026-01-26 17:39:45.304342341 +0000 UTC m=+0.227254155 container attach d059d8ba7b916af9c6d794a08d4a66031d6966d0712f6ff53fc91e98e02b2b7f (image=quay.io/ceph/ceph:v18, name=laughing_kapitsa, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:45 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 26 12:39:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 12:39:45 np0005596060 systemd[1]: libpod-d059d8ba7b916af9c6d794a08d4a66031d6966d0712f6ff53fc91e98e02b2b7f.scope: Deactivated successfully.
Jan 26 12:39:45 np0005596060 podman[75541]: 2026-01-26 17:39:45.886325062 +0000 UTC m=+0.809236916 container died d059d8ba7b916af9c6d794a08d4a66031d6966d0712f6ff53fc91e98e02b2b7f (image=quay.io/ceph/ceph:v18, name=laughing_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:39:45 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ab1bf96c12d070309911caf69a5b379db161c86924c462c05d53fee5e652a4ad-merged.mount: Deactivated successfully.
Jan 26 12:39:45 np0005596060 podman[75541]: 2026-01-26 17:39:45.927013053 +0000 UTC m=+0.849924857 container remove d059d8ba7b916af9c6d794a08d4a66031d6966d0712f6ff53fc91e98e02b2b7f (image=quay.io/ceph/ceph:v18, name=laughing_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:45 np0005596060 systemd[1]: libpod-conmon-d059d8ba7b916af9c6d794a08d4a66031d6966d0712f6ff53fc91e98e02b2b7f.scope: Deactivated successfully.
Jan 26 12:39:45 np0005596060 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 12:39:45 np0005596060 podman[75597]: 2026-01-26 17:39:45.988366447 +0000 UTC m=+0.038668888 container create 8357c0222712cb3a1f4c0441ed4a0ba3bf054ee788f8689010ee521c7ef1b7db (image=quay.io/ceph/ceph:v18, name=inspiring_germain, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:39:46 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:46 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:46 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:46 np0005596060 systemd[1]: Started libpod-conmon-8357c0222712cb3a1f4c0441ed4a0ba3bf054ee788f8689010ee521c7ef1b7db.scope.
Jan 26 12:39:46 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68306f469878eec8b6b613c22ec0b6920eee505e0ea3ea7fec6889db0a25457e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68306f469878eec8b6b613c22ec0b6920eee505e0ea3ea7fec6889db0a25457e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68306f469878eec8b6b613c22ec0b6920eee505e0ea3ea7fec6889db0a25457e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:46 np0005596060 podman[75597]: 2026-01-26 17:39:45.972765411 +0000 UTC m=+0.023067882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:46 np0005596060 podman[75597]: 2026-01-26 17:39:46.083711704 +0000 UTC m=+0.134014155 container init 8357c0222712cb3a1f4c0441ed4a0ba3bf054ee788f8689010ee521c7ef1b7db (image=quay.io/ceph/ceph:v18, name=inspiring_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:46 np0005596060 podman[75597]: 2026-01-26 17:39:46.090657451 +0000 UTC m=+0.140959902 container start 8357c0222712cb3a1f4c0441ed4a0ba3bf054ee788f8689010ee521c7ef1b7db (image=quay.io/ceph/ceph:v18, name=inspiring_germain, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 12:39:46 np0005596060 podman[75597]: 2026-01-26 17:39:46.09823721 +0000 UTC m=+0.148539681 container attach 8357c0222712cb3a1f4c0441ed4a0ba3bf054ee788f8689010ee521c7ef1b7db (image=quay.io/ceph/ceph:v18, name=inspiring_germain, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:39:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Jan 26 12:39:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Set ssh ssh_user
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 26 12:39:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Jan 26 12:39:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Set ssh ssh_config
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 26 12:39:46 np0005596060 inspiring_germain[75613]: ssh user set to ceph-admin. sudo will be used
Jan 26 12:39:46 np0005596060 systemd[1]: libpod-8357c0222712cb3a1f4c0441ed4a0ba3bf054ee788f8689010ee521c7ef1b7db.scope: Deactivated successfully.
Jan 26 12:39:46 np0005596060 podman[75597]: 2026-01-26 17:39:46.692490145 +0000 UTC m=+0.742792606 container died 8357c0222712cb3a1f4c0441ed4a0ba3bf054ee788f8689010ee521c7ef1b7db (image=quay.io/ceph/ceph:v18, name=inspiring_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 12:39:46 np0005596060 systemd[1]: var-lib-containers-storage-overlay-68306f469878eec8b6b613c22ec0b6920eee505e0ea3ea7fec6889db0a25457e-merged.mount: Deactivated successfully.
Jan 26 12:39:46 np0005596060 podman[75597]: 2026-01-26 17:39:46.732059814 +0000 UTC m=+0.782362265 container remove 8357c0222712cb3a1f4c0441ed4a0ba3bf054ee788f8689010ee521c7ef1b7db (image=quay.io/ceph/ceph:v18, name=inspiring_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: [cephadm INFO cherrypy.error] [26/Jan/2026:17:39:46] ENGINE Bus STARTING
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : [26/Jan/2026:17:39:46] ENGINE Bus STARTING
Jan 26 12:39:46 np0005596060 systemd[1]: libpod-conmon-8357c0222712cb3a1f4c0441ed4a0ba3bf054ee788f8689010ee521c7ef1b7db.scope: Deactivated successfully.
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: [cephadm INFO cherrypy.error] [26/Jan/2026:17:39:46] ENGINE Serving on http://192.168.122.100:8765
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : [26/Jan/2026:17:39:46] ENGINE Serving on http://192.168.122.100:8765
Jan 26 12:39:46 np0005596060 podman[75652]: 2026-01-26 17:39:46.863630048 +0000 UTC m=+0.102218372 container create cfc75053e87019a8205bbaad81a080f70ac07b301f801d191a7c8b0e4d666cae (image=quay.io/ceph/ceph:v18, name=suspicious_tesla, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:46 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.mbryrf(active, since 2s)
Jan 26 12:39:46 np0005596060 podman[75652]: 2026-01-26 17:39:46.832606545 +0000 UTC m=+0.071194849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:46 np0005596060 systemd[1]: Started libpod-conmon-cfc75053e87019a8205bbaad81a080f70ac07b301f801d191a7c8b0e4d666cae.scope.
Jan 26 12:39:46 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998a0e125338aac483462aac73d5e9434cac36fb457ae2e2cb33defed91a9f04/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998a0e125338aac483462aac73d5e9434cac36fb457ae2e2cb33defed91a9f04/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998a0e125338aac483462aac73d5e9434cac36fb457ae2e2cb33defed91a9f04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998a0e125338aac483462aac73d5e9434cac36fb457ae2e2cb33defed91a9f04/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/998a0e125338aac483462aac73d5e9434cac36fb457ae2e2cb33defed91a9f04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:46 np0005596060 podman[75652]: 2026-01-26 17:39:46.950743042 +0000 UTC m=+0.189331346 container init cfc75053e87019a8205bbaad81a080f70ac07b301f801d191a7c8b0e4d666cae (image=quay.io/ceph/ceph:v18, name=suspicious_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:46 np0005596060 podman[75652]: 2026-01-26 17:39:46.95663281 +0000 UTC m=+0.195221144 container start cfc75053e87019a8205bbaad81a080f70ac07b301f801d191a7c8b0e4d666cae (image=quay.io/ceph/ceph:v18, name=suspicious_tesla, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 26 12:39:46 np0005596060 podman[75652]: 2026-01-26 17:39:46.960108798 +0000 UTC m=+0.198697082 container attach cfc75053e87019a8205bbaad81a080f70ac07b301f801d191a7c8b0e4d666cae (image=quay.io/ceph/ceph:v18, name=suspicious_tesla, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: [cephadm INFO cherrypy.error] [26/Jan/2026:17:39:46] ENGINE Serving on https://192.168.122.100:7150
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : [26/Jan/2026:17:39:46] ENGINE Serving on https://192.168.122.100:7150
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: [cephadm INFO cherrypy.error] [26/Jan/2026:17:39:46] ENGINE Bus STARTED
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : [26/Jan/2026:17:39:46] ENGINE Bus STARTED
Jan 26 12:39:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 26 12:39:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: [cephadm INFO cherrypy.error] [26/Jan/2026:17:39:46] ENGINE Client ('192.168.122.100', 60382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 12:39:46 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : [26/Jan/2026:17:39:46] ENGINE Client ('192.168.122.100', 60382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 12:39:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:47 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:39:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Jan 26 12:39:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:47 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 26 12:39:47 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 26 12:39:47 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Set ssh private key
Jan 26 12:39:47 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 26 12:39:47 np0005596060 systemd[1]: libpod-cfc75053e87019a8205bbaad81a080f70ac07b301f801d191a7c8b0e4d666cae.scope: Deactivated successfully.
Jan 26 12:39:47 np0005596060 conmon[75691]: conmon cfc75053e87019a8205b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfc75053e87019a8205bbaad81a080f70ac07b301f801d191a7c8b0e4d666cae.scope/container/memory.events
Jan 26 12:39:47 np0005596060 podman[75652]: 2026-01-26 17:39:47.589328932 +0000 UTC m=+0.827917216 container died cfc75053e87019a8205bbaad81a080f70ac07b301f801d191a7c8b0e4d666cae (image=quay.io/ceph/ceph:v18, name=suspicious_tesla, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 12:39:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay-998a0e125338aac483462aac73d5e9434cac36fb457ae2e2cb33defed91a9f04-merged.mount: Deactivated successfully.
Jan 26 12:39:47 np0005596060 podman[75652]: 2026-01-26 17:39:47.632004447 +0000 UTC m=+0.870592731 container remove cfc75053e87019a8205bbaad81a080f70ac07b301f801d191a7c8b0e4d666cae (image=quay.io/ceph/ceph:v18, name=suspicious_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 12:39:47 np0005596060 systemd[1]: libpod-conmon-cfc75053e87019a8205bbaad81a080f70ac07b301f801d191a7c8b0e4d666cae.scope: Deactivated successfully.
Jan 26 12:39:47 np0005596060 podman[75727]: 2026-01-26 17:39:47.706476974 +0000 UTC m=+0.047994881 container create 1065b85e74f893435350ce1695b9e7d0cfede7351a904bd8840d3ac8b499d784 (image=quay.io/ceph/ceph:v18, name=youthful_jones, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 12:39:47 np0005596060 systemd[1]: Started libpod-conmon-1065b85e74f893435350ce1695b9e7d0cfede7351a904bd8840d3ac8b499d784.scope.
Jan 26 12:39:47 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b0ebf12425beb54a6f980c4b453d35f3104849346a663580039fbd73cc1159/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b0ebf12425beb54a6f980c4b453d35f3104849346a663580039fbd73cc1159/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b0ebf12425beb54a6f980c4b453d35f3104849346a663580039fbd73cc1159/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b0ebf12425beb54a6f980c4b453d35f3104849346a663580039fbd73cc1159/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48b0ebf12425beb54a6f980c4b453d35f3104849346a663580039fbd73cc1159/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:47 np0005596060 podman[75727]: 2026-01-26 17:39:47.686469126 +0000 UTC m=+0.027987073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:47 np0005596060 podman[75727]: 2026-01-26 17:39:47.785045813 +0000 UTC m=+0.126563770 container init 1065b85e74f893435350ce1695b9e7d0cfede7351a904bd8840d3ac8b499d784 (image=quay.io/ceph/ceph:v18, name=youthful_jones, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:47 np0005596060 podman[75727]: 2026-01-26 17:39:47.794289434 +0000 UTC m=+0.135807351 container start 1065b85e74f893435350ce1695b9e7d0cfede7351a904bd8840d3ac8b499d784 (image=quay.io/ceph/ceph:v18, name=youthful_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:39:47 np0005596060 podman[75727]: 2026-01-26 17:39:47.797573695 +0000 UTC m=+0.139091612 container attach 1065b85e74f893435350ce1695b9e7d0cfede7351a904bd8840d3ac8b499d784 (image=quay.io/ceph/ceph:v18, name=youthful_jones, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:47 np0005596060 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 12:39:48 np0005596060 ceph-mon[74267]: Set ssh ssh_user
Jan 26 12:39:48 np0005596060 ceph-mon[74267]: Set ssh ssh_config
Jan 26 12:39:48 np0005596060 ceph-mon[74267]: ssh user set to ceph-admin. sudo will be used
Jan 26 12:39:48 np0005596060 ceph-mon[74267]: [26/Jan/2026:17:39:46] ENGINE Bus STARTING
Jan 26 12:39:48 np0005596060 ceph-mon[74267]: [26/Jan/2026:17:39:46] ENGINE Serving on http://192.168.122.100:8765
Jan 26 12:39:48 np0005596060 ceph-mon[74267]: [26/Jan/2026:17:39:46] ENGINE Serving on https://192.168.122.100:7150
Jan 26 12:39:48 np0005596060 ceph-mon[74267]: [26/Jan/2026:17:39:46] ENGINE Bus STARTED
Jan 26 12:39:48 np0005596060 ceph-mon[74267]: [26/Jan/2026:17:39:46] ENGINE Client ('192.168.122.100', 60382) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 26 12:39:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:48 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:39:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Jan 26 12:39:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:48 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 26 12:39:48 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 26 12:39:48 np0005596060 systemd[1]: libpod-1065b85e74f893435350ce1695b9e7d0cfede7351a904bd8840d3ac8b499d784.scope: Deactivated successfully.
Jan 26 12:39:48 np0005596060 podman[75769]: 2026-01-26 17:39:48.425953819 +0000 UTC m=+0.025570135 container died 1065b85e74f893435350ce1695b9e7d0cfede7351a904bd8840d3ac8b499d784 (image=quay.io/ceph/ceph:v18, name=youthful_jones, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 26 12:39:48 np0005596060 systemd[1]: var-lib-containers-storage-overlay-48b0ebf12425beb54a6f980c4b453d35f3104849346a663580039fbd73cc1159-merged.mount: Deactivated successfully.
Jan 26 12:39:48 np0005596060 podman[75769]: 2026-01-26 17:39:48.472699433 +0000 UTC m=+0.072315729 container remove 1065b85e74f893435350ce1695b9e7d0cfede7351a904bd8840d3ac8b499d784 (image=quay.io/ceph/ceph:v18, name=youthful_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 12:39:48 np0005596060 systemd[1]: libpod-conmon-1065b85e74f893435350ce1695b9e7d0cfede7351a904bd8840d3ac8b499d784.scope: Deactivated successfully.
Jan 26 12:39:48 np0005596060 podman[75784]: 2026-01-26 17:39:48.551908136 +0000 UTC m=+0.052514329 container create b1b7b810caf8f99161c170ac675b6dc0a2974204d84ed0b69607208e20efb38d (image=quay.io/ceph/ceph:v18, name=nice_kowalevski, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 12:39:48 np0005596060 systemd[1]: Started libpod-conmon-b1b7b810caf8f99161c170ac675b6dc0a2974204d84ed0b69607208e20efb38d.scope.
Jan 26 12:39:48 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdab4ef744a79526f97af221e794ebcfb4422684fa92a5e3dbbebc2480a4d65c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdab4ef744a79526f97af221e794ebcfb4422684fa92a5e3dbbebc2480a4d65c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdab4ef744a79526f97af221e794ebcfb4422684fa92a5e3dbbebc2480a4d65c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:48 np0005596060 podman[75784]: 2026-01-26 17:39:48.528595046 +0000 UTC m=+0.029201329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:48 np0005596060 podman[75784]: 2026-01-26 17:39:48.629218368 +0000 UTC m=+0.129824591 container init b1b7b810caf8f99161c170ac675b6dc0a2974204d84ed0b69607208e20efb38d (image=quay.io/ceph/ceph:v18, name=nice_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:48 np0005596060 podman[75784]: 2026-01-26 17:39:48.635881014 +0000 UTC m=+0.136487217 container start b1b7b810caf8f99161c170ac675b6dc0a2974204d84ed0b69607208e20efb38d (image=quay.io/ceph/ceph:v18, name=nice_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 12:39:48 np0005596060 podman[75784]: 2026-01-26 17:39:48.638708518 +0000 UTC m=+0.139314721 container attach b1b7b810caf8f99161c170ac675b6dc0a2974204d84ed0b69607208e20efb38d (image=quay.io/ceph/ceph:v18, name=nice_kowalevski, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 12:39:49 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:39:49 np0005596060 nice_kowalevski[75801]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfAcRNf16RMDpmm2jX1ClhAeh8o8TcxPXEkwm5Keu+ooLaMi6njkMDC1bBSq6dgjxyII6hv0mCL04IYLopxVW1uAvhbewB9ij+YLWIiKV3HVFUdRtTXH6dsM9OAFUdHCNx4LedUqynTVrIZFOdgGe2wyj9t86PVImQNqb4TL9Eix/KMEdc5CiRtAcspsXmKbcKclR075tXELHGVdSScgqx/b5B+2p1K90fDbyOUuDTZs4aj4aDmdGmetbRvpj0Tek7pZuG+pmYYhZaCDW5BzbYy5sLQTMnlZJe8ZTqsnHokoQXQDgeTXH5pVJkLoF3raR7NEl8XHsnp/JU+Zm2AxCPP6u6KbzibBjtNLHmmc0n1wXqPXKnw8jR23i4m+t7LI+21PG71SHE+Ej8XaLV33QINBKwwImzA7yzChnVGoBXMZZQbHlc4tuLcZAKW98Fu2dNO+Sil4qnu89FugbI+2oxhD6Vk+tBqQeoYGDxBwOTnMdNZ9jkJbnRaTbYQT0eISs= zuul@controller
Jan 26 12:39:49 np0005596060 systemd[1]: libpod-b1b7b810caf8f99161c170ac675b6dc0a2974204d84ed0b69607208e20efb38d.scope: Deactivated successfully.
Jan 26 12:39:49 np0005596060 podman[75784]: 2026-01-26 17:39:49.199075241 +0000 UTC m=+0.699681484 container died b1b7b810caf8f99161c170ac675b6dc0a2974204d84ed0b69607208e20efb38d (image=quay.io/ceph/ceph:v18, name=nice_kowalevski, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fdab4ef744a79526f97af221e794ebcfb4422684fa92a5e3dbbebc2480a4d65c-merged.mount: Deactivated successfully.
Jan 26 12:39:49 np0005596060 podman[75784]: 2026-01-26 17:39:49.248759624 +0000 UTC m=+0.749365827 container remove b1b7b810caf8f99161c170ac675b6dc0a2974204d84ed0b69607208e20efb38d (image=quay.io/ceph/ceph:v18, name=nice_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:49 np0005596060 systemd[1]: libpod-conmon-b1b7b810caf8f99161c170ac675b6dc0a2974204d84ed0b69607208e20efb38d.scope: Deactivated successfully.
Jan 26 12:39:49 np0005596060 podman[75840]: 2026-01-26 17:39:49.313470102 +0000 UTC m=+0.043911941 container create f88351a791461781e1ce5a3d76d0fb5205b2c31acca0c6655efe575a41b9e8b1 (image=quay.io/ceph/ceph:v18, name=happy_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:49 np0005596060 ceph-mon[74267]: Set ssh ssh_identity_key
Jan 26 12:39:49 np0005596060 ceph-mon[74267]: Set ssh private key
Jan 26 12:39:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:49 np0005596060 ceph-mon[74267]: Set ssh ssh_identity_pub
Jan 26 12:39:49 np0005596060 podman[75840]: 2026-01-26 17:39:49.293293627 +0000 UTC m=+0.023735486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:49 np0005596060 systemd[1]: Started libpod-conmon-f88351a791461781e1ce5a3d76d0fb5205b2c31acca0c6655efe575a41b9e8b1.scope.
Jan 26 12:39:49 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f117f615f4801ac6a083ab3d87459cbaf6dd254c9bad35a393b57baa1b5c254c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f117f615f4801ac6a083ab3d87459cbaf6dd254c9bad35a393b57baa1b5c254c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f117f615f4801ac6a083ab3d87459cbaf6dd254c9bad35a393b57baa1b5c254c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:49 np0005596060 podman[75840]: 2026-01-26 17:39:49.445817105 +0000 UTC m=+0.176259004 container init f88351a791461781e1ce5a3d76d0fb5205b2c31acca0c6655efe575a41b9e8b1 (image=quay.io/ceph/ceph:v18, name=happy_wilbur, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:39:49 np0005596060 podman[75840]: 2026-01-26 17:39:49.454308248 +0000 UTC m=+0.184750077 container start f88351a791461781e1ce5a3d76d0fb5205b2c31acca0c6655efe575a41b9e8b1 (image=quay.io/ceph/ceph:v18, name=happy_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:39:49 np0005596060 podman[75840]: 2026-01-26 17:39:49.461824885 +0000 UTC m=+0.192266744 container attach f88351a791461781e1ce5a3d76d0fb5205b2c31acca0c6655efe575a41b9e8b1 (image=quay.io/ceph/ceph:v18, name=happy_wilbur, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:39:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053148 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:39:49 np0005596060 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 12:39:50 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:39:50 np0005596060 systemd[1]: Created slice User Slice of UID 42477.
Jan 26 12:39:50 np0005596060 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 26 12:39:50 np0005596060 systemd-logind[786]: New session 21 of user ceph-admin.
Jan 26 12:39:50 np0005596060 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 26 12:39:50 np0005596060 systemd[1]: Starting User Manager for UID 42477...
Jan 26 12:39:50 np0005596060 systemd[75887]: Queued start job for default target Main User Target.
Jan 26 12:39:50 np0005596060 systemd[75887]: Created slice User Application Slice.
Jan 26 12:39:50 np0005596060 systemd[75887]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 26 12:39:50 np0005596060 systemd[75887]: Started Daily Cleanup of User's Temporary Directories.
Jan 26 12:39:50 np0005596060 systemd[75887]: Reached target Paths.
Jan 26 12:39:50 np0005596060 systemd[75887]: Reached target Timers.
Jan 26 12:39:50 np0005596060 systemd[75887]: Starting D-Bus User Message Bus Socket...
Jan 26 12:39:50 np0005596060 systemd[75887]: Starting Create User's Volatile Files and Directories...
Jan 26 12:39:50 np0005596060 systemd[75887]: Finished Create User's Volatile Files and Directories.
Jan 26 12:39:50 np0005596060 systemd[75887]: Listening on D-Bus User Message Bus Socket.
Jan 26 12:39:50 np0005596060 systemd[75887]: Reached target Sockets.
Jan 26 12:39:50 np0005596060 systemd[75887]: Reached target Basic System.
Jan 26 12:39:50 np0005596060 systemd[75887]: Reached target Main User Target.
Jan 26 12:39:50 np0005596060 systemd[75887]: Startup finished in 129ms.
Jan 26 12:39:50 np0005596060 systemd[1]: Started User Manager for UID 42477.
Jan 26 12:39:50 np0005596060 systemd[1]: Started Session 21 of User ceph-admin.
Jan 26 12:39:50 np0005596060 systemd-logind[786]: New session 23 of user ceph-admin.
Jan 26 12:39:50 np0005596060 systemd[1]: Started Session 23 of User ceph-admin.
Jan 26 12:39:50 np0005596060 systemd-logind[786]: New session 24 of user ceph-admin.
Jan 26 12:39:50 np0005596060 systemd[1]: Started Session 24 of User ceph-admin.
Jan 26 12:39:51 np0005596060 systemd-logind[786]: New session 25 of user ceph-admin.
Jan 26 12:39:51 np0005596060 systemd[1]: Started Session 25 of User ceph-admin.
Jan 26 12:39:51 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 26 12:39:51 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 26 12:39:51 np0005596060 systemd-logind[786]: New session 26 of user ceph-admin.
Jan 26 12:39:51 np0005596060 systemd[1]: Started Session 26 of User ceph-admin.
Jan 26 12:39:51 np0005596060 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 12:39:52 np0005596060 systemd-logind[786]: New session 27 of user ceph-admin.
Jan 26 12:39:52 np0005596060 systemd[1]: Started Session 27 of User ceph-admin.
Jan 26 12:39:52 np0005596060 ceph-mon[74267]: Deploying cephadm binary to compute-0
Jan 26 12:39:52 np0005596060 systemd-logind[786]: New session 28 of user ceph-admin.
Jan 26 12:39:52 np0005596060 systemd[1]: Started Session 28 of User ceph-admin.
Jan 26 12:39:53 np0005596060 systemd-logind[786]: New session 29 of user ceph-admin.
Jan 26 12:39:53 np0005596060 systemd[1]: Started Session 29 of User ceph-admin.
Jan 26 12:39:53 np0005596060 systemd-logind[786]: New session 30 of user ceph-admin.
Jan 26 12:39:53 np0005596060 systemd[1]: Started Session 30 of User ceph-admin.
Jan 26 12:39:53 np0005596060 systemd-logind[786]: New session 31 of user ceph-admin.
Jan 26 12:39:53 np0005596060 systemd[1]: Started Session 31 of User ceph-admin.
Jan 26 12:39:53 np0005596060 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 12:39:54 np0005596060 systemd-logind[786]: New session 32 of user ceph-admin.
Jan 26 12:39:54 np0005596060 systemd[1]: Started Session 32 of User ceph-admin.
Jan 26 12:39:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:39:54 np0005596060 systemd-logind[786]: New session 33 of user ceph-admin.
Jan 26 12:39:54 np0005596060 systemd[1]: Started Session 33 of User ceph-admin.
Jan 26 12:39:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 26 12:39:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:55 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Added host compute-0
Jan 26 12:39:55 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 26 12:39:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 26 12:39:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 12:39:55 np0005596060 happy_wilbur[75857]: Added host 'compute-0' with addr '192.168.122.100'
Jan 26 12:39:55 np0005596060 systemd[1]: libpod-f88351a791461781e1ce5a3d76d0fb5205b2c31acca0c6655efe575a41b9e8b1.scope: Deactivated successfully.
Jan 26 12:39:55 np0005596060 podman[75840]: 2026-01-26 17:39:55.599716856 +0000 UTC m=+6.330158685 container died f88351a791461781e1ce5a3d76d0fb5205b2c31acca0c6655efe575a41b9e8b1 (image=quay.io/ceph/ceph:v18, name=happy_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:39:55 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f117f615f4801ac6a083ab3d87459cbaf6dd254c9bad35a393b57baa1b5c254c-merged.mount: Deactivated successfully.
Jan 26 12:39:55 np0005596060 podman[75840]: 2026-01-26 17:39:55.878548833 +0000 UTC m=+6.608990662 container remove f88351a791461781e1ce5a3d76d0fb5205b2c31acca0c6655efe575a41b9e8b1 (image=quay.io/ceph/ceph:v18, name=happy_wilbur, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:39:55 np0005596060 systemd[1]: libpod-conmon-f88351a791461781e1ce5a3d76d0fb5205b2c31acca0c6655efe575a41b9e8b1.scope: Deactivated successfully.
Jan 26 12:39:55 np0005596060 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 12:39:55 np0005596060 podman[76602]: 2026-01-26 17:39:55.988122895 +0000 UTC m=+0.080574783 container create f6be3fd220b92b36841c03b46ab343c73e3ec6f1fdab92a0c253c0da82896660 (image=quay.io/ceph/ceph:v18, name=admiring_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 12:39:56 np0005596060 systemd[1]: Started libpod-conmon-f6be3fd220b92b36841c03b46ab343c73e3ec6f1fdab92a0c253c0da82896660.scope.
Jan 26 12:39:56 np0005596060 podman[76602]: 2026-01-26 17:39:55.942247983 +0000 UTC m=+0.034699911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:56 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7efee0bcd5b0566fee9f8c604e415fb8535df12af7ceb141a3591e592b9062ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7efee0bcd5b0566fee9f8c604e415fb8535df12af7ceb141a3591e592b9062ce/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7efee0bcd5b0566fee9f8c604e415fb8535df12af7ceb141a3591e592b9062ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:56 np0005596060 podman[76602]: 2026-01-26 17:39:56.081634225 +0000 UTC m=+0.174086163 container init f6be3fd220b92b36841c03b46ab343c73e3ec6f1fdab92a0c253c0da82896660 (image=quay.io/ceph/ceph:v18, name=admiring_diffie, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 26 12:39:56 np0005596060 podman[76602]: 2026-01-26 17:39:56.091699596 +0000 UTC m=+0.184151474 container start f6be3fd220b92b36841c03b46ab343c73e3ec6f1fdab92a0c253c0da82896660 (image=quay.io/ceph/ceph:v18, name=admiring_diffie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 12:39:56 np0005596060 podman[76602]: 2026-01-26 17:39:56.096585056 +0000 UTC m=+0.189036934 container attach f6be3fd220b92b36841c03b46ab343c73e3ec6f1fdab92a0c253c0da82896660 (image=quay.io/ceph/ceph:v18, name=admiring_diffie, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 12:39:56 np0005596060 podman[76649]: 2026-01-26 17:39:56.184712087 +0000 UTC m=+0.042697056 container create 58e5ad1a19151bc54b1997a8cc389ccb294b1e9e0045d42b118ab7b42cb905c7 (image=quay.io/ceph/ceph:v18, name=clever_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 12:39:56 np0005596060 systemd[1]: Started libpod-conmon-58e5ad1a19151bc54b1997a8cc389ccb294b1e9e0045d42b118ab7b42cb905c7.scope.
Jan 26 12:39:56 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:56 np0005596060 podman[76649]: 2026-01-26 17:39:56.165283281 +0000 UTC m=+0.023268270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:56 np0005596060 podman[76649]: 2026-01-26 17:39:56.269310299 +0000 UTC m=+0.127295288 container init 58e5ad1a19151bc54b1997a8cc389ccb294b1e9e0045d42b118ab7b42cb905c7 (image=quay.io/ceph/ceph:v18, name=clever_goldstine, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:56 np0005596060 podman[76649]: 2026-01-26 17:39:56.275732916 +0000 UTC m=+0.133717885 container start 58e5ad1a19151bc54b1997a8cc389ccb294b1e9e0045d42b118ab7b42cb905c7 (image=quay.io/ceph/ceph:v18, name=clever_goldstine, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:39:56 np0005596060 podman[76649]: 2026-01-26 17:39:56.279567977 +0000 UTC m=+0.137552976 container attach 58e5ad1a19151bc54b1997a8cc389ccb294b1e9e0045d42b118ab7b42cb905c7 (image=quay.io/ceph/ceph:v18, name=clever_goldstine, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:56 np0005596060 ceph-mon[74267]: Added host compute-0
Jan 26 12:39:56 np0005596060 clever_goldstine[76666]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 26 12:39:56 np0005596060 systemd[1]: libpod-58e5ad1a19151bc54b1997a8cc389ccb294b1e9e0045d42b118ab7b42cb905c7.scope: Deactivated successfully.
Jan 26 12:39:56 np0005596060 podman[76690]: 2026-01-26 17:39:56.634757061 +0000 UTC m=+0.024098460 container died 58e5ad1a19151bc54b1997a8cc389ccb294b1e9e0045d42b118ab7b42cb905c7 (image=quay.io/ceph/ceph:v18, name=clever_goldstine, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:39:56 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:39:56 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 26 12:39:56 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 26 12:39:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 26 12:39:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:56 np0005596060 admiring_diffie[76620]: Scheduled mon update...
Jan 26 12:39:56 np0005596060 systemd[1]: libpod-f6be3fd220b92b36841c03b46ab343c73e3ec6f1fdab92a0c253c0da82896660.scope: Deactivated successfully.
Jan 26 12:39:57 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c8998c525722d526785f4e34521f52b7b29188e68f0a8804ab68af1a9d722538-merged.mount: Deactivated successfully.
Jan 26 12:39:57 np0005596060 podman[76690]: 2026-01-26 17:39:57.101120527 +0000 UTC m=+0.490461906 container remove 58e5ad1a19151bc54b1997a8cc389ccb294b1e9e0045d42b118ab7b42cb905c7 (image=quay.io/ceph/ceph:v18, name=clever_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:39:57 np0005596060 systemd[1]: libpod-conmon-58e5ad1a19151bc54b1997a8cc389ccb294b1e9e0045d42b118ab7b42cb905c7.scope: Deactivated successfully.
Jan 26 12:39:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Jan 26 12:39:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:57 np0005596060 podman[76602]: 2026-01-26 17:39:57.185568993 +0000 UTC m=+1.278020871 container died f6be3fd220b92b36841c03b46ab343c73e3ec6f1fdab92a0c253c0da82896660 (image=quay.io/ceph/ceph:v18, name=admiring_diffie, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:39:57 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7efee0bcd5b0566fee9f8c604e415fb8535df12af7ceb141a3591e592b9062ce-merged.mount: Deactivated successfully.
Jan 26 12:39:57 np0005596060 podman[76703]: 2026-01-26 17:39:57.225484026 +0000 UTC m=+0.456437651 container remove f6be3fd220b92b36841c03b46ab343c73e3ec6f1fdab92a0c253c0da82896660 (image=quay.io/ceph/ceph:v18, name=admiring_diffie, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 26 12:39:57 np0005596060 systemd[1]: libpod-conmon-f6be3fd220b92b36841c03b46ab343c73e3ec6f1fdab92a0c253c0da82896660.scope: Deactivated successfully.
Jan 26 12:39:57 np0005596060 podman[76750]: 2026-01-26 17:39:57.293919571 +0000 UTC m=+0.047528035 container create 871aa07d18c8b5368078821c58a2083d197d59aae5754870cd9b809bbd34fff5 (image=quay.io/ceph/ceph:v18, name=sleepy_fermat, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:57 np0005596060 systemd[1]: Started libpod-conmon-871aa07d18c8b5368078821c58a2083d197d59aae5754870cd9b809bbd34fff5.scope.
Jan 26 12:39:57 np0005596060 podman[76750]: 2026-01-26 17:39:57.273084712 +0000 UTC m=+0.026693196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:57 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:57 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a92fa43b7d8b5856bbaa1ff12d5beb8ebadfd9bb8e345356c2e8ae0f460b7e6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:57 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a92fa43b7d8b5856bbaa1ff12d5beb8ebadfd9bb8e345356c2e8ae0f460b7e6c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:57 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a92fa43b7d8b5856bbaa1ff12d5beb8ebadfd9bb8e345356c2e8ae0f460b7e6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:57 np0005596060 podman[76750]: 2026-01-26 17:39:57.391338825 +0000 UTC m=+0.144947309 container init 871aa07d18c8b5368078821c58a2083d197d59aae5754870cd9b809bbd34fff5 (image=quay.io/ceph/ceph:v18, name=sleepy_fermat, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:57 np0005596060 podman[76750]: 2026-01-26 17:39:57.396947292 +0000 UTC m=+0.150555756 container start 871aa07d18c8b5368078821c58a2083d197d59aae5754870cd9b809bbd34fff5 (image=quay.io/ceph/ceph:v18, name=sleepy_fermat, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 12:39:57 np0005596060 podman[76750]: 2026-01-26 17:39:57.400570195 +0000 UTC m=+0.154178689 container attach 871aa07d18c8b5368078821c58a2083d197d59aae5754870cd9b809bbd34fff5 (image=quay.io/ceph/ceph:v18, name=sleepy_fermat, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:39:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:39:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:57 np0005596060 ceph-mon[74267]: Saving service mon spec with placement count:5
Jan 26 12:39:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:57 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:39:57 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 26 12:39:57 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 26 12:39:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 26 12:39:57 np0005596060 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 12:39:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:57 np0005596060 sleepy_fermat[76808]: Scheduled mgr update...
Jan 26 12:39:57 np0005596060 systemd[1]: libpod-871aa07d18c8b5368078821c58a2083d197d59aae5754870cd9b809bbd34fff5.scope: Deactivated successfully.
Jan 26 12:39:57 np0005596060 podman[76750]: 2026-01-26 17:39:57.979443962 +0000 UTC m=+0.733052426 container died 871aa07d18c8b5368078821c58a2083d197d59aae5754870cd9b809bbd34fff5 (image=quay.io/ceph/ceph:v18, name=sleepy_fermat, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:39:58 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a92fa43b7d8b5856bbaa1ff12d5beb8ebadfd9bb8e345356c2e8ae0f460b7e6c-merged.mount: Deactivated successfully.
Jan 26 12:39:58 np0005596060 podman[76750]: 2026-01-26 17:39:58.02924827 +0000 UTC m=+0.782856734 container remove 871aa07d18c8b5368078821c58a2083d197d59aae5754870cd9b809bbd34fff5 (image=quay.io/ceph/ceph:v18, name=sleepy_fermat, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 12:39:58 np0005596060 systemd[1]: libpod-conmon-871aa07d18c8b5368078821c58a2083d197d59aae5754870cd9b809bbd34fff5.scope: Deactivated successfully.
Jan 26 12:39:58 np0005596060 podman[76995]: 2026-01-26 17:39:58.092881417 +0000 UTC m=+0.045804570 container create 24fecf3966b1bb8ffd2839b39bb172ae224dcfedb1dbbec2b437a9896596fe88 (image=quay.io/ceph/ceph:v18, name=admiring_bhaskara, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:58 np0005596060 systemd[1]: Started libpod-conmon-24fecf3966b1bb8ffd2839b39bb172ae224dcfedb1dbbec2b437a9896596fe88.scope.
Jan 26 12:39:58 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d51cb5de630d3b12137bf06c32395dc648c468fcc42b4bc5c763addb7b4f05/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d51cb5de630d3b12137bf06c32395dc648c468fcc42b4bc5c763addb7b4f05/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d51cb5de630d3b12137bf06c32395dc648c468fcc42b4bc5c763addb7b4f05/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:58 np0005596060 podman[76995]: 2026-01-26 17:39:58.072067079 +0000 UTC m=+0.024990242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:58 np0005596060 podman[76995]: 2026-01-26 17:39:58.17236568 +0000 UTC m=+0.125288863 container init 24fecf3966b1bb8ffd2839b39bb172ae224dcfedb1dbbec2b437a9896596fe88 (image=quay.io/ceph/ceph:v18, name=admiring_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:39:58 np0005596060 podman[76995]: 2026-01-26 17:39:58.18023262 +0000 UTC m=+0.133155763 container start 24fecf3966b1bb8ffd2839b39bb172ae224dcfedb1dbbec2b437a9896596fe88 (image=quay.io/ceph/ceph:v18, name=admiring_bhaskara, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 26 12:39:58 np0005596060 podman[76995]: 2026-01-26 17:39:58.183323934 +0000 UTC m=+0.136247117 container attach 24fecf3966b1bb8ffd2839b39bb172ae224dcfedb1dbbec2b437a9896596fe88 (image=quay.io/ceph/ceph:v18, name=admiring_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 12:39:58 np0005596060 podman[77083]: 2026-01-26 17:39:58.527332306 +0000 UTC m=+0.069945452 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:39:58 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:39:58 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Saving service crash spec with placement *
Jan 26 12:39:58 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 26 12:39:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 26 12:39:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:58 np0005596060 admiring_bhaskara[77034]: Scheduled crash update...
Jan 26 12:39:58 np0005596060 systemd[1]: libpod-24fecf3966b1bb8ffd2839b39bb172ae224dcfedb1dbbec2b437a9896596fe88.scope: Deactivated successfully.
Jan 26 12:39:58 np0005596060 podman[76995]: 2026-01-26 17:39:58.801548033 +0000 UTC m=+0.754471196 container died 24fecf3966b1bb8ffd2839b39bb172ae224dcfedb1dbbec2b437a9896596fe88 (image=quay.io/ceph/ceph:v18, name=admiring_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Jan 26 12:39:58 np0005596060 systemd[1]: var-lib-containers-storage-overlay-12d51cb5de630d3b12137bf06c32395dc648c468fcc42b4bc5c763addb7b4f05-merged.mount: Deactivated successfully.
Jan 26 12:39:58 np0005596060 podman[76995]: 2026-01-26 17:39:58.856581223 +0000 UTC m=+0.809504376 container remove 24fecf3966b1bb8ffd2839b39bb172ae224dcfedb1dbbec2b437a9896596fe88 (image=quay.io/ceph/ceph:v18, name=admiring_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:39:58 np0005596060 systemd[1]: libpod-conmon-24fecf3966b1bb8ffd2839b39bb172ae224dcfedb1dbbec2b437a9896596fe88.scope: Deactivated successfully.
Jan 26 12:39:58 np0005596060 podman[77083]: 2026-01-26 17:39:58.886137804 +0000 UTC m=+0.428750950 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 12:39:58 np0005596060 podman[77134]: 2026-01-26 17:39:58.945563956 +0000 UTC m=+0.056831997 container create ae1ff99730ddfe0ab06196a44014a20ee82881773c534a952ec40f54a65d1fdc (image=quay.io/ceph/ceph:v18, name=wizardly_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:39:58 np0005596060 ceph-mon[74267]: Saving service mgr spec with placement count:2
Jan 26 12:39:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:58 np0005596060 systemd[1]: Started libpod-conmon-ae1ff99730ddfe0ab06196a44014a20ee82881773c534a952ec40f54a65d1fdc.scope.
Jan 26 12:39:59 np0005596060 podman[77134]: 2026-01-26 17:39:58.917261182 +0000 UTC m=+0.028529253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:39:59 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:39:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b444a04b362317ac1b712c86c4843ad33eb0dcb7d04dd24c9586e9f3aa059d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b444a04b362317ac1b712c86c4843ad33eb0dcb7d04dd24c9586e9f3aa059d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b444a04b362317ac1b712c86c4843ad33eb0dcb7d04dd24c9586e9f3aa059d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:39:59 np0005596060 podman[77134]: 2026-01-26 17:39:59.042250843 +0000 UTC m=+0.153518874 container init ae1ff99730ddfe0ab06196a44014a20ee82881773c534a952ec40f54a65d1fdc (image=quay.io/ceph/ceph:v18, name=wizardly_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 26 12:39:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:39:59 np0005596060 podman[77134]: 2026-01-26 17:39:59.048446632 +0000 UTC m=+0.159714663 container start ae1ff99730ddfe0ab06196a44014a20ee82881773c534a952ec40f54a65d1fdc (image=quay.io/ceph/ceph:v18, name=wizardly_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:39:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:39:59 np0005596060 podman[77134]: 2026-01-26 17:39:59.053287411 +0000 UTC m=+0.164555482 container attach ae1ff99730ddfe0ab06196a44014a20ee82881773c534a952ec40f54a65d1fdc (image=quay.io/ceph/ceph:v18, name=wizardly_boyd, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 12:39:59 np0005596060 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77314 (sysctl)
Jan 26 12:39:59 np0005596060 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 26 12:39:59 np0005596060 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 26 12:39:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Jan 26 12:39:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:39:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1755047401' entity='client.admin' 
Jan 26 12:39:59 np0005596060 systemd[1]: libpod-ae1ff99730ddfe0ab06196a44014a20ee82881773c534a952ec40f54a65d1fdc.scope: Deactivated successfully.
Jan 26 12:39:59 np0005596060 podman[77134]: 2026-01-26 17:39:59.844821913 +0000 UTC m=+0.956089984 container died ae1ff99730ddfe0ab06196a44014a20ee82881773c534a952ec40f54a65d1fdc (image=quay.io/ceph/ceph:v18, name=wizardly_boyd, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:39:59 np0005596060 systemd[1]: var-lib-containers-storage-overlay-23b444a04b362317ac1b712c86c4843ad33eb0dcb7d04dd24c9586e9f3aa059d-merged.mount: Deactivated successfully.
Jan 26 12:39:59 np0005596060 podman[77134]: 2026-01-26 17:39:59.913001768 +0000 UTC m=+1.024269789 container remove ae1ff99730ddfe0ab06196a44014a20ee82881773c534a952ec40f54a65d1fdc (image=quay.io/ceph/ceph:v18, name=wizardly_boyd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 12:39:59 np0005596060 systemd[1]: libpod-conmon-ae1ff99730ddfe0ab06196a44014a20ee82881773c534a952ec40f54a65d1fdc.scope: Deactivated successfully.
Jan 26 12:39:59 np0005596060 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 12:39:59 np0005596060 podman[77365]: 2026-01-26 17:39:59.980149185 +0000 UTC m=+0.045507440 container create c44a432c942d64d44e48f808ab0c5d96703ba5d6c1b7eb3483813472584d49b5 (image=quay.io/ceph/ceph:v18, name=zealous_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:40:00 np0005596060 systemd[1]: Started libpod-conmon-c44a432c942d64d44e48f808ab0c5d96703ba5d6c1b7eb3483813472584d49b5.scope.
Jan 26 12:40:00 np0005596060 podman[77365]: 2026-01-26 17:39:59.960745669 +0000 UTC m=+0.026103944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:40:00 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:00 np0005596060 ceph-mon[74267]: Saving service crash spec with placement *
Jan 26 12:40:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:00 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1755047401' entity='client.admin' 
Jan 26 12:40:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f773f73b4db193d4a0f68f44b85eb0e8fe8df3be94b4f17f38f0f03ef0e09f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f773f73b4db193d4a0f68f44b85eb0e8fe8df3be94b4f17f38f0f03ef0e09f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17f773f73b4db193d4a0f68f44b85eb0e8fe8df3be94b4f17f38f0f03ef0e09f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:00 np0005596060 podman[77365]: 2026-01-26 17:40:00.072617837 +0000 UTC m=+0.137976112 container init c44a432c942d64d44e48f808ab0c5d96703ba5d6c1b7eb3483813472584d49b5 (image=quay.io/ceph/ceph:v18, name=zealous_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:40:00 np0005596060 podman[77365]: 2026-01-26 17:40:00.0813687 +0000 UTC m=+0.146726955 container start c44a432c942d64d44e48f808ab0c5d96703ba5d6c1b7eb3483813472584d49b5 (image=quay.io/ceph/ceph:v18, name=zealous_maxwell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 26 12:40:00 np0005596060 podman[77365]: 2026-01-26 17:40:00.085211982 +0000 UTC m=+0.150570237 container attach c44a432c942d64d44e48f808ab0c5d96703ba5d6c1b7eb3483813472584d49b5 (image=quay.io/ceph/ceph:v18, name=zealous_maxwell, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:40:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:40:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:00 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:40:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Jan 26 12:40:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:00 np0005596060 systemd[1]: libpod-c44a432c942d64d44e48f808ab0c5d96703ba5d6c1b7eb3483813472584d49b5.scope: Deactivated successfully.
Jan 26 12:40:00 np0005596060 podman[77606]: 2026-01-26 17:40:00.714739297 +0000 UTC m=+0.032210729 container died c44a432c942d64d44e48f808ab0c5d96703ba5d6c1b7eb3483813472584d49b5 (image=quay.io/ceph/ceph:v18, name=zealous_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 12:40:00 np0005596060 systemd[1]: var-lib-containers-storage-overlay-17f773f73b4db193d4a0f68f44b85eb0e8fe8df3be94b4f17f38f0f03ef0e09f-merged.mount: Deactivated successfully.
Jan 26 12:40:00 np0005596060 podman[77606]: 2026-01-26 17:40:00.758835974 +0000 UTC m=+0.076307416 container remove c44a432c942d64d44e48f808ab0c5d96703ba5d6c1b7eb3483813472584d49b5 (image=quay.io/ceph/ceph:v18, name=zealous_maxwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:40:00 np0005596060 systemd[1]: libpod-conmon-c44a432c942d64d44e48f808ab0c5d96703ba5d6c1b7eb3483813472584d49b5.scope: Deactivated successfully.
Jan 26 12:40:00 np0005596060 podman[77633]: 2026-01-26 17:40:00.837453065 +0000 UTC m=+0.049695095 container create 426b8b5af8cc4512f4c207d43cd3f8f121d1d0d9151c3d98206a5291fe603e50 (image=quay.io/ceph/ceph:v18, name=recursing_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:40:00 np0005596060 systemd[1]: Started libpod-conmon-426b8b5af8cc4512f4c207d43cd3f8f121d1d0d9151c3d98206a5291fe603e50.scope.
Jan 26 12:40:00 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c03cd90129ecb09d920d0dac836922e9ee02999ecc3be871df74cb4853451b3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c03cd90129ecb09d920d0dac836922e9ee02999ecc3be871df74cb4853451b3b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c03cd90129ecb09d920d0dac836922e9ee02999ecc3be871df74cb4853451b3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:00 np0005596060 podman[77633]: 2026-01-26 17:40:00.817392355 +0000 UTC m=+0.029634415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:40:00 np0005596060 podman[77633]: 2026-01-26 17:40:00.927088262 +0000 UTC m=+0.139330332 container init 426b8b5af8cc4512f4c207d43cd3f8f121d1d0d9151c3d98206a5291fe603e50 (image=quay.io/ceph/ceph:v18, name=recursing_liskov, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 12:40:00 np0005596060 podman[77633]: 2026-01-26 17:40:00.934308018 +0000 UTC m=+0.146550048 container start 426b8b5af8cc4512f4c207d43cd3f8f121d1d0d9151c3d98206a5291fe603e50 (image=quay.io/ceph/ceph:v18, name=recursing_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 12:40:00 np0005596060 podman[77633]: 2026-01-26 17:40:00.937278728 +0000 UTC m=+0.149520768 container attach 426b8b5af8cc4512f4c207d43cd3f8f121d1d0d9151c3d98206a5291fe603e50 (image=quay.io/ceph/ceph:v18, name=recursing_liskov, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 12:40:01 np0005596060 podman[77679]: 2026-01-26 17:40:01.006383607 +0000 UTC m=+0.043074320 container create a11dbffb0574f07df009a0feb3a189220ea61ca1fccaa70987c88054469d9e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:40:01 np0005596060 systemd[1]: Started libpod-conmon-a11dbffb0574f07df009a0feb3a189220ea61ca1fccaa70987c88054469d9e74.scope.
Jan 26 12:40:01 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:01 np0005596060 podman[77679]: 2026-01-26 17:40:01.072067371 +0000 UTC m=+0.108758084 container init a11dbffb0574f07df009a0feb3a189220ea61ca1fccaa70987c88054469d9e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:40:01 np0005596060 podman[77679]: 2026-01-26 17:40:01.078080712 +0000 UTC m=+0.114771425 container start a11dbffb0574f07df009a0feb3a189220ea61ca1fccaa70987c88054469d9e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 26 12:40:01 np0005596060 podman[77679]: 2026-01-26 17:40:01.080840034 +0000 UTC m=+0.117530927 container attach a11dbffb0574f07df009a0feb3a189220ea61ca1fccaa70987c88054469d9e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 12:40:01 np0005596060 sad_lichterman[77695]: 167 167
Jan 26 12:40:01 np0005596060 systemd[1]: libpod-a11dbffb0574f07df009a0feb3a189220ea61ca1fccaa70987c88054469d9e74.scope: Deactivated successfully.
Jan 26 12:40:01 np0005596060 podman[77679]: 2026-01-26 17:40:00.988481827 +0000 UTC m=+0.025172590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:40:01 np0005596060 podman[77679]: 2026-01-26 17:40:01.084069283 +0000 UTC m=+0.120760026 container died a11dbffb0574f07df009a0feb3a189220ea61ca1fccaa70987c88054469d9e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:40:01 np0005596060 systemd[1]: var-lib-containers-storage-overlay-274ab93abfc3336fd44271219dd04ae080e10d7ac9bc1b3641bcaeb7937f25b4-merged.mount: Deactivated successfully.
Jan 26 12:40:01 np0005596060 podman[77679]: 2026-01-26 17:40:01.124429372 +0000 UTC m=+0.161120085 container remove a11dbffb0574f07df009a0feb3a189220ea61ca1fccaa70987c88054469d9e74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_lichterman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Jan 26 12:40:01 np0005596060 systemd[1]: libpod-conmon-a11dbffb0574f07df009a0feb3a189220ea61ca1fccaa70987c88054469d9e74.scope: Deactivated successfully.
Jan 26 12:40:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:01 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:40:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 26 12:40:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:01 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Added label _admin to host compute-0
Jan 26 12:40:01 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 26 12:40:01 np0005596060 recursing_liskov[77674]: Added label _admin to host compute-0
Jan 26 12:40:01 np0005596060 systemd[1]: libpod-426b8b5af8cc4512f4c207d43cd3f8f121d1d0d9151c3d98206a5291fe603e50.scope: Deactivated successfully.
Jan 26 12:40:01 np0005596060 podman[77633]: 2026-01-26 17:40:01.491385751 +0000 UTC m=+0.703627771 container died 426b8b5af8cc4512f4c207d43cd3f8f121d1d0d9151c3d98206a5291fe603e50 (image=quay.io/ceph/ceph:v18, name=recursing_liskov, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:40:01 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c03cd90129ecb09d920d0dac836922e9ee02999ecc3be871df74cb4853451b3b-merged.mount: Deactivated successfully.
Jan 26 12:40:01 np0005596060 podman[77633]: 2026-01-26 17:40:01.532511408 +0000 UTC m=+0.744753428 container remove 426b8b5af8cc4512f4c207d43cd3f8f121d1d0d9151c3d98206a5291fe603e50 (image=quay.io/ceph/ceph:v18, name=recursing_liskov, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:40:01 np0005596060 systemd[1]: libpod-conmon-426b8b5af8cc4512f4c207d43cd3f8f121d1d0d9151c3d98206a5291fe603e50.scope: Deactivated successfully.
Jan 26 12:40:01 np0005596060 podman[77746]: 2026-01-26 17:40:01.587908002 +0000 UTC m=+0.036836990 container create ad619fd9de182388d92ed6f24e7a73ac2885dd676c18adb3244dadcfc34e6c32 (image=quay.io/ceph/ceph:v18, name=blissful_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 12:40:01 np0005596060 systemd[1]: Started libpod-conmon-ad619fd9de182388d92ed6f24e7a73ac2885dd676c18adb3244dadcfc34e6c32.scope.
Jan 26 12:40:01 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5663468a96d9a59f8f263e9693e8ff004dfc01d2a334c10c63fabd8049c42e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5663468a96d9a59f8f263e9693e8ff004dfc01d2a334c10c63fabd8049c42e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5663468a96d9a59f8f263e9693e8ff004dfc01d2a334c10c63fabd8049c42e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:01 np0005596060 podman[77746]: 2026-01-26 17:40:01.655409272 +0000 UTC m=+0.104338280 container init ad619fd9de182388d92ed6f24e7a73ac2885dd676c18adb3244dadcfc34e6c32 (image=quay.io/ceph/ceph:v18, name=blissful_bouman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 26 12:40:01 np0005596060 podman[77746]: 2026-01-26 17:40:01.661763167 +0000 UTC m=+0.110692155 container start ad619fd9de182388d92ed6f24e7a73ac2885dd676c18adb3244dadcfc34e6c32 (image=quay.io/ceph/ceph:v18, name=blissful_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 12:40:01 np0005596060 podman[77746]: 2026-01-26 17:40:01.665415202 +0000 UTC m=+0.114344190 container attach ad619fd9de182388d92ed6f24e7a73ac2885dd676c18adb3244dadcfc34e6c32 (image=quay.io/ceph/ceph:v18, name=blissful_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 12:40:01 np0005596060 podman[77746]: 2026-01-26 17:40:01.571678023 +0000 UTC m=+0.020607031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:40:01 np0005596060 ceph-mgr[74563]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 26 12:40:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Jan 26 12:40:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2561979900' entity='client.admin' 
Jan 26 12:40:02 np0005596060 systemd[1]: libpod-ad619fd9de182388d92ed6f24e7a73ac2885dd676c18adb3244dadcfc34e6c32.scope: Deactivated successfully.
Jan 26 12:40:02 np0005596060 podman[77746]: 2026-01-26 17:40:02.379446205 +0000 UTC m=+0.828375213 container died ad619fd9de182388d92ed6f24e7a73ac2885dd676c18adb3244dadcfc34e6c32 (image=quay.io/ceph/ceph:v18, name=blissful_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:40:02 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2d5663468a96d9a59f8f263e9693e8ff004dfc01d2a334c10c63fabd8049c42e-merged.mount: Deactivated successfully.
Jan 26 12:40:02 np0005596060 podman[77746]: 2026-01-26 17:40:02.417053583 +0000 UTC m=+0.865982571 container remove ad619fd9de182388d92ed6f24e7a73ac2885dd676c18adb3244dadcfc34e6c32 (image=quay.io/ceph/ceph:v18, name=blissful_bouman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 12:40:02 np0005596060 systemd[1]: libpod-conmon-ad619fd9de182388d92ed6f24e7a73ac2885dd676c18adb3244dadcfc34e6c32.scope: Deactivated successfully.
Jan 26 12:40:02 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:02 np0005596060 ceph-mon[74267]: Added label _admin to host compute-0
Jan 26 12:40:02 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/2561979900' entity='client.admin' 
Jan 26 12:40:02 np0005596060 podman[77800]: 2026-01-26 17:40:02.482415024 +0000 UTC m=+0.048869804 container create 8fc78ea20cf2e564f201e5cb338df9eb9c9e35d6c1d48aad4da926fe188feeea (image=quay.io/ceph/ceph:v18, name=great_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 12:40:02 np0005596060 systemd[1]: Started libpod-conmon-8fc78ea20cf2e564f201e5cb338df9eb9c9e35d6c1d48aad4da926fe188feeea.scope.
Jan 26 12:40:02 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22e5251bae6293e2a3c01479f4360e6b46c1540c126660d63cdc519197c8266/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22e5251bae6293e2a3c01479f4360e6b46c1540c126660d63cdc519197c8266/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c22e5251bae6293e2a3c01479f4360e6b46c1540c126660d63cdc519197c8266/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:02 np0005596060 podman[77800]: 2026-01-26 17:40:02.46469709 +0000 UTC m=+0.031151850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:40:02 np0005596060 podman[77800]: 2026-01-26 17:40:02.560003897 +0000 UTC m=+0.126458657 container init 8fc78ea20cf2e564f201e5cb338df9eb9c9e35d6c1d48aad4da926fe188feeea (image=quay.io/ceph/ceph:v18, name=great_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:40:02 np0005596060 podman[77800]: 2026-01-26 17:40:02.567489603 +0000 UTC m=+0.133944343 container start 8fc78ea20cf2e564f201e5cb338df9eb9c9e35d6c1d48aad4da926fe188feeea (image=quay.io/ceph/ceph:v18, name=great_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 26 12:40:02 np0005596060 podman[77800]: 2026-01-26 17:40:02.571519202 +0000 UTC m=+0.137973962 container attach 8fc78ea20cf2e564f201e5cb338df9eb9c9e35d6c1d48aad4da926fe188feeea (image=quay.io/ceph/ceph:v18, name=great_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 12:40:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Jan 26 12:40:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3628147028' entity='client.admin' 
Jan 26 12:40:03 np0005596060 great_mcnulty[77817]: set mgr/dashboard/cluster/status
Jan 26 12:40:03 np0005596060 systemd[1]: libpod-8fc78ea20cf2e564f201e5cb338df9eb9c9e35d6c1d48aad4da926fe188feeea.scope: Deactivated successfully.
Jan 26 12:40:03 np0005596060 conmon[77817]: conmon 8fc78ea20cf2e564f201 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8fc78ea20cf2e564f201e5cb338df9eb9c9e35d6c1d48aad4da926fe188feeea.scope/container/memory.events
Jan 26 12:40:03 np0005596060 podman[77800]: 2026-01-26 17:40:03.225657885 +0000 UTC m=+0.792112625 container died 8fc78ea20cf2e564f201e5cb338df9eb9c9e35d6c1d48aad4da926fe188feeea (image=quay.io/ceph/ceph:v18, name=great_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:40:03 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c22e5251bae6293e2a3c01479f4360e6b46c1540c126660d63cdc519197c8266-merged.mount: Deactivated successfully.
Jan 26 12:40:03 np0005596060 podman[77800]: 2026-01-26 17:40:03.271593819 +0000 UTC m=+0.838048559 container remove 8fc78ea20cf2e564f201e5cb338df9eb9c9e35d6c1d48aad4da926fe188feeea (image=quay.io/ceph/ceph:v18, name=great_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:40:03 np0005596060 systemd[1]: libpod-conmon-8fc78ea20cf2e564f201e5cb338df9eb9c9e35d6c1d48aad4da926fe188feeea.scope: Deactivated successfully.
Jan 26 12:40:03 np0005596060 podman[77862]: 2026-01-26 17:40:03.446338886 +0000 UTC m=+0.041099167 container create 4373cbb2f382871c7fc68ebfb44368fae01b7ff089c9c13ebb6e9fb8d36ac291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 12:40:03 np0005596060 systemd[1]: Started libpod-conmon-4373cbb2f382871c7fc68ebfb44368fae01b7ff089c9c13ebb6e9fb8d36ac291.scope.
Jan 26 12:40:03 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1eab0234a4bd40ccfb08cf2dbcedfd9f63ec39251b2bb556041eb85d30dc0b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1eab0234a4bd40ccfb08cf2dbcedfd9f63ec39251b2bb556041eb85d30dc0b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1eab0234a4bd40ccfb08cf2dbcedfd9f63ec39251b2bb556041eb85d30dc0b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1eab0234a4bd40ccfb08cf2dbcedfd9f63ec39251b2bb556041eb85d30dc0b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:03 np0005596060 podman[77862]: 2026-01-26 17:40:03.42774596 +0000 UTC m=+0.022506271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:40:03 np0005596060 podman[77862]: 2026-01-26 17:40:03.528103113 +0000 UTC m=+0.122863394 container init 4373cbb2f382871c7fc68ebfb44368fae01b7ff089c9c13ebb6e9fb8d36ac291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 12:40:03 np0005596060 podman[77862]: 2026-01-26 17:40:03.539012805 +0000 UTC m=+0.133773076 container start 4373cbb2f382871c7fc68ebfb44368fae01b7ff089c9c13ebb6e9fb8d36ac291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:40:03 np0005596060 podman[77862]: 2026-01-26 17:40:03.541828919 +0000 UTC m=+0.136589230 container attach 4373cbb2f382871c7fc68ebfb44368fae01b7ff089c9c13ebb6e9fb8d36ac291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 26 12:40:03 np0005596060 python3[77908]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:40:03 np0005596060 ceph-mgr[74563]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 26 12:40:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:03 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 26 12:40:04 np0005596060 podman[77909]: 2026-01-26 17:40:04.033332613 +0000 UTC m=+0.057595236 container create 5cb251061b41e930e665ccf620db548696517ebc8c5a6ca8eed9a9a37cf6e8b4 (image=quay.io/ceph/ceph:v18, name=magical_shamir, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:40:04 np0005596060 systemd[1]: Started libpod-conmon-5cb251061b41e930e665ccf620db548696517ebc8c5a6ca8eed9a9a37cf6e8b4.scope.
Jan 26 12:40:04 np0005596060 podman[77909]: 2026-01-26 17:40:04.001111314 +0000 UTC m=+0.025373987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:40:04 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a351b4d3b53e6e88ade96fb62fd3c0c2363dc65cc9135aed2c1b4508557e325/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a351b4d3b53e6e88ade96fb62fd3c0c2363dc65cc9135aed2c1b4508557e325/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:04 np0005596060 podman[77909]: 2026-01-26 17:40:04.111787467 +0000 UTC m=+0.136050050 container init 5cb251061b41e930e665ccf620db548696517ebc8c5a6ca8eed9a9a37cf6e8b4 (image=quay.io/ceph/ceph:v18, name=magical_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:40:04 np0005596060 podman[77909]: 2026-01-26 17:40:04.119367577 +0000 UTC m=+0.143630200 container start 5cb251061b41e930e665ccf620db548696517ebc8c5a6ca8eed9a9a37cf6e8b4 (image=quay.io/ceph/ceph:v18, name=magical_shamir, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:40:04 np0005596060 podman[77909]: 2026-01-26 17:40:04.122823164 +0000 UTC m=+0.147085767 container attach 5cb251061b41e930e665ccf620db548696517ebc8c5a6ca8eed9a9a37cf6e8b4 (image=quay.io/ceph/ceph:v18, name=magical_shamir, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3628147028' entity='client.admin' 
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2699108443' entity='client.admin' 
Jan 26 12:40:04 np0005596060 systemd[1]: libpod-5cb251061b41e930e665ccf620db548696517ebc8c5a6ca8eed9a9a37cf6e8b4.scope: Deactivated successfully.
Jan 26 12:40:04 np0005596060 conmon[77924]: conmon 5cb251061b41e930e665 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5cb251061b41e930e665ccf620db548696517ebc8c5a6ca8eed9a9a37cf6e8b4.scope/container/memory.events
Jan 26 12:40:04 np0005596060 podman[77909]: 2026-01-26 17:40:04.714545085 +0000 UTC m=+0.738807668 container died 5cb251061b41e930e665ccf620db548696517ebc8c5a6ca8eed9a9a37cf6e8b4 (image=quay.io/ceph/ceph:v18, name=magical_shamir, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:40:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7a351b4d3b53e6e88ade96fb62fd3c0c2363dc65cc9135aed2c1b4508557e325-merged.mount: Deactivated successfully.
Jan 26 12:40:04 np0005596060 podman[77909]: 2026-01-26 17:40:04.753372188 +0000 UTC m=+0.777634771 container remove 5cb251061b41e930e665ccf620db548696517ebc8c5a6ca8eed9a9a37cf6e8b4 (image=quay.io/ceph/ceph:v18, name=magical_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:40:04 np0005596060 systemd[1]: libpod-conmon-5cb251061b41e930e665ccf620db548696517ebc8c5a6ca8eed9a9a37cf6e8b4.scope: Deactivated successfully.
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]: [
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:    {
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:        "available": false,
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:        "ceph_device": false,
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:        "lsm_data": {},
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:        "lvs": [],
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:        "path": "/dev/sr0",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:        "rejected_reasons": [
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "Has a FileSystem",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "Insufficient space (<5GB)"
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:        ],
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:        "sys_api": {
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "actuators": null,
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "device_nodes": "sr0",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "devname": "sr0",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "human_readable_size": "482.00 KB",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "id_bus": "ata",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "model": "QEMU DVD-ROM",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "nr_requests": "2",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "parent": "/dev/sr0",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "partitions": {},
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "path": "/dev/sr0",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "removable": "1",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "rev": "2.5+",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "ro": "0",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "rotational": "1",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "sas_address": "",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "sas_device_handle": "",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "scheduler_mode": "mq-deadline",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "sectors": 0,
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "sectorsize": "2048",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "size": 493568.0,
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "support_discard": "2048",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "type": "disk",
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:            "vendor": "QEMU"
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:        }
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]:    }
Jan 26 12:40:04 np0005596060 agitated_mayer[77878]: ]
Jan 26 12:40:04 np0005596060 systemd[1]: libpod-4373cbb2f382871c7fc68ebfb44368fae01b7ff089c9c13ebb6e9fb8d36ac291.scope: Deactivated successfully.
Jan 26 12:40:04 np0005596060 podman[77862]: 2026-01-26 17:40:04.798139479 +0000 UTC m=+1.392899790 container died 4373cbb2f382871c7fc68ebfb44368fae01b7ff089c9c13ebb6e9fb8d36ac291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:40:04 np0005596060 systemd[1]: libpod-4373cbb2f382871c7fc68ebfb44368fae01b7ff089c9c13ebb6e9fb8d36ac291.scope: Consumed 1.247s CPU time.
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:40:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a1eab0234a4bd40ccfb08cf2dbcedfd9f63ec39251b2bb556041eb85d30dc0b0-merged.mount: Deactivated successfully.
Jan 26 12:40:04 np0005596060 podman[77862]: 2026-01-26 17:40:04.861222487 +0000 UTC m=+1.455982808 container remove 4373cbb2f382871c7fc68ebfb44368fae01b7ff089c9c13ebb6e9fb8d36ac291 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_mayer, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:40:04 np0005596060 systemd[1]: libpod-conmon-4373cbb2f382871c7fc68ebfb44368fae01b7ff089c9c13ebb6e9fb8d36ac291.scope: Deactivated successfully.
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:40:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:40:04 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 26 12:40:04 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 26 12:40:05 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/2699108443' entity='client.admin' 
Jan 26 12:40:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 12:40:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:40:05 np0005596060 ansible-async_wrapper.py[79526]: Invoked with j356485221647 30 /home/zuul/.ansible/tmp/ansible-tmp-1769449205.1344736-37217-610461697794/AnsiballZ_command.py _
Jan 26 12:40:05 np0005596060 ansible-async_wrapper.py[79581]: Starting module and watcher
Jan 26 12:40:05 np0005596060 ansible-async_wrapper.py[79581]: Start watching 79584 (30)
Jan 26 12:40:05 np0005596060 ansible-async_wrapper.py[79584]: Start module (79584)
Jan 26 12:40:05 np0005596060 ansible-async_wrapper.py[79526]: Return async_wrapper task started.
Jan 26 12:40:05 np0005596060 python3[79587]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:40:05 np0005596060 podman[79655]: 2026-01-26 17:40:05.9460315 +0000 UTC m=+0.048205940 container create 32b9d601c972e714df001f4c32ba9b2d85830a93bf34dd0f489f182bdaa12dd0 (image=quay.io/ceph/ceph:v18, name=hardcore_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 12:40:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:05 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:40:05 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:40:05 np0005596060 systemd[1]: Started libpod-conmon-32b9d601c972e714df001f4c32ba9b2d85830a93bf34dd0f489f182bdaa12dd0.scope.
Jan 26 12:40:06 np0005596060 podman[79655]: 2026-01-26 17:40:05.924403992 +0000 UTC m=+0.026578462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:40:06 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:06 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7619357c1cd5bed1ec33b31f0e79c3a233ac92642005693de545d4d2f9d509e7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:06 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7619357c1cd5bed1ec33b31f0e79c3a233ac92642005693de545d4d2f9d509e7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:06 np0005596060 podman[79655]: 2026-01-26 17:40:06.049817459 +0000 UTC m=+0.151991929 container init 32b9d601c972e714df001f4c32ba9b2d85830a93bf34dd0f489f182bdaa12dd0 (image=quay.io/ceph/ceph:v18, name=hardcore_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 12:40:06 np0005596060 podman[79655]: 2026-01-26 17:40:06.058663075 +0000 UTC m=+0.160837515 container start 32b9d601c972e714df001f4c32ba9b2d85830a93bf34dd0f489f182bdaa12dd0 (image=quay.io/ceph/ceph:v18, name=hardcore_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 12:40:06 np0005596060 podman[79655]: 2026-01-26 17:40:06.062432984 +0000 UTC m=+0.164607424 container attach 32b9d601c972e714df001f4c32ba9b2d85830a93bf34dd0f489f182bdaa12dd0 (image=quay.io/ceph/ceph:v18, name=hardcore_vaughan, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:40:06 np0005596060 ceph-mon[74267]: Updating compute-0:/etc/ceph/ceph.conf
Jan 26 12:40:06 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 12:40:06 np0005596060 hardcore_vaughan[79709]: 
Jan 26 12:40:06 np0005596060 hardcore_vaughan[79709]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 26 12:40:06 np0005596060 systemd[1]: libpod-32b9d601c972e714df001f4c32ba9b2d85830a93bf34dd0f489f182bdaa12dd0.scope: Deactivated successfully.
Jan 26 12:40:06 np0005596060 podman[79655]: 2026-01-26 17:40:06.655602829 +0000 UTC m=+0.757777269 container died 32b9d601c972e714df001f4c32ba9b2d85830a93bf34dd0f489f182bdaa12dd0 (image=quay.io/ceph/ceph:v18, name=hardcore_vaughan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 12:40:06 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7619357c1cd5bed1ec33b31f0e79c3a233ac92642005693de545d4d2f9d509e7-merged.mount: Deactivated successfully.
Jan 26 12:40:06 np0005596060 podman[79655]: 2026-01-26 17:40:06.698660497 +0000 UTC m=+0.800834937 container remove 32b9d601c972e714df001f4c32ba9b2d85830a93bf34dd0f489f182bdaa12dd0 (image=quay.io/ceph/ceph:v18, name=hardcore_vaughan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 26 12:40:06 np0005596060 systemd[1]: libpod-conmon-32b9d601c972e714df001f4c32ba9b2d85830a93bf34dd0f489f182bdaa12dd0.scope: Deactivated successfully.
Jan 26 12:40:06 np0005596060 ansible-async_wrapper.py[79584]: Module complete (79584)
Jan 26 12:40:07 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 12:40:07 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 12:40:07 np0005596060 python3[80229]: ansible-ansible.legacy.async_status Invoked with jid=j356485221647.79526 mode=status _async_dir=/root/.ansible_async
Jan 26 12:40:07 np0005596060 ceph-mon[74267]: Updating compute-0:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:40:07 np0005596060 python3[80403]: ansible-ansible.legacy.async_status Invoked with jid=j356485221647.79526 mode=cleanup _async_dir=/root/.ansible_async
Jan 26 12:40:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:08 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.client.admin.keyring
Jan 26 12:40:08 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.client.admin.keyring
Jan 26 12:40:08 np0005596060 python3[80653]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 26 12:40:08 np0005596060 python3[80857]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:40:08 np0005596060 podman[80916]: 2026-01-26 17:40:08.592896137 +0000 UTC m=+0.026939793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:40:08 np0005596060 ceph-mon[74267]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 26 12:40:08 np0005596060 podman[80916]: 2026-01-26 17:40:08.888643456 +0000 UTC m=+0.322687102 container create dc1a55b5812aed4594be4c27b4a673eaae6b5164a5a86dd06cd8648d9f07c854 (image=quay.io/ceph/ceph:v18, name=cool_allen, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:40:08 np0005596060 systemd[1]: Started libpod-conmon-dc1a55b5812aed4594be4c27b4a673eaae6b5164a5a86dd06cd8648d9f07c854.scope.
Jan 26 12:40:08 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a00c913da7f82c185057c67d801c26b339b6bc2955f4fafe39299a710fa17d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a00c913da7f82c185057c67d801c26b339b6bc2955f4fafe39299a710fa17d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98a00c913da7f82c185057c67d801c26b339b6bc2955f4fafe39299a710fa17d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:08 np0005596060 podman[80916]: 2026-01-26 17:40:08.986624498 +0000 UTC m=+0.420668124 container init dc1a55b5812aed4594be4c27b4a673eaae6b5164a5a86dd06cd8648d9f07c854 (image=quay.io/ceph/ceph:v18, name=cool_allen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 12:40:08 np0005596060 podman[80916]: 2026-01-26 17:40:08.996298853 +0000 UTC m=+0.430342479 container start dc1a55b5812aed4594be4c27b4a673eaae6b5164a5a86dd06cd8648d9f07c854 (image=quay.io/ceph/ceph:v18, name=cool_allen, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:40:08 np0005596060 podman[80916]: 2026-01-26 17:40:08.999610647 +0000 UTC m=+0.433654253 container attach dc1a55b5812aed4594be4c27b4a673eaae6b5164a5a86dd06cd8648d9f07c854 (image=quay.io/ceph/ceph:v18, name=cool_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:09 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev 8375f457-4ea5-4637-bcc7-1eb1ba173ff6 (Updating crash deployment (+1 -> 1))
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:40:09 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 26 12:40:09 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 26 12:40:09 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 12:40:09 np0005596060 cool_allen[81071]: 
Jan 26 12:40:09 np0005596060 cool_allen[81071]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 26 12:40:09 np0005596060 systemd[1]: libpod-dc1a55b5812aed4594be4c27b4a673eaae6b5164a5a86dd06cd8648d9f07c854.scope: Deactivated successfully.
Jan 26 12:40:09 np0005596060 podman[80916]: 2026-01-26 17:40:09.568933702 +0000 UTC m=+1.002977328 container died dc1a55b5812aed4594be4c27b4a673eaae6b5164a5a86dd06cd8648d9f07c854 (image=quay.io/ceph/ceph:v18, name=cool_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:40:09 np0005596060 systemd[1]: var-lib-containers-storage-overlay-98a00c913da7f82c185057c67d801c26b339b6bc2955f4fafe39299a710fa17d-merged.mount: Deactivated successfully.
Jan 26 12:40:09 np0005596060 podman[80916]: 2026-01-26 17:40:09.615407599 +0000 UTC m=+1.049451205 container remove dc1a55b5812aed4594be4c27b4a673eaae6b5164a5a86dd06cd8648d9f07c854 (image=quay.io/ceph/ceph:v18, name=cool_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 12:40:09 np0005596060 systemd[1]: libpod-conmon-dc1a55b5812aed4594be4c27b4a673eaae6b5164a5a86dd06cd8648d9f07c854.scope: Deactivated successfully.
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: Updating compute-0:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.client.admin.keyring
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 26 12:40:09 np0005596060 ceph-mon[74267]: Deploying daemon crash.compute-0 on compute-0
Jan 26 12:40:09 np0005596060 podman[81345]: 2026-01-26 17:40:09.89778972 +0000 UTC m=+0.047907755 container create e6354776b20a451ea0336c24d6f4454efb93123cb7c4d9357c98f4f7eb4d0b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 12:40:09 np0005596060 systemd[1]: Started libpod-conmon-e6354776b20a451ea0336c24d6f4454efb93123cb7c4d9357c98f4f7eb4d0b85.scope.
Jan 26 12:40:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:09 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:09 np0005596060 podman[81345]: 2026-01-26 17:40:09.875016003 +0000 UTC m=+0.025134018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:40:09 np0005596060 podman[81345]: 2026-01-26 17:40:09.972115832 +0000 UTC m=+0.122233837 container init e6354776b20a451ea0336c24d6f4454efb93123cb7c4d9357c98f4f7eb4d0b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 12:40:09 np0005596060 podman[81345]: 2026-01-26 17:40:09.9775687 +0000 UTC m=+0.127686685 container start e6354776b20a451ea0336c24d6f4454efb93123cb7c4d9357c98f4f7eb4d0b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 12:40:09 np0005596060 stoic_sanderson[81361]: 167 167
Jan 26 12:40:09 np0005596060 systemd[1]: libpod-e6354776b20a451ea0336c24d6f4454efb93123cb7c4d9357c98f4f7eb4d0b85.scope: Deactivated successfully.
Jan 26 12:40:09 np0005596060 podman[81345]: 2026-01-26 17:40:09.980777021 +0000 UTC m=+0.130895026 container attach e6354776b20a451ea0336c24d6f4454efb93123cb7c4d9357c98f4f7eb4d0b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:40:09 np0005596060 podman[81345]: 2026-01-26 17:40:09.982798332 +0000 UTC m=+0.132916317 container died e6354776b20a451ea0336c24d6f4454efb93123cb7c4d9357c98f4f7eb4d0b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sanderson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 12:40:10 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f1ba69a7ee1c7cc1b8a57e1ea849f16661b86b7aa7aa2c795f78f9413e918cb1-merged.mount: Deactivated successfully.
Jan 26 12:40:10 np0005596060 podman[81345]: 2026-01-26 17:40:10.019244705 +0000 UTC m=+0.169362690 container remove e6354776b20a451ea0336c24d6f4454efb93123cb7c4d9357c98f4f7eb4d0b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 12:40:10 np0005596060 systemd[1]: libpod-conmon-e6354776b20a451ea0336c24d6f4454efb93123cb7c4d9357c98f4f7eb4d0b85.scope: Deactivated successfully.
Jan 26 12:40:10 np0005596060 systemd[1]: Reloading.
Jan 26 12:40:10 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:40:10 np0005596060 python3[81398]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:40:10 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:40:10 np0005596060 podman[81442]: 2026-01-26 17:40:10.205162033 +0000 UTC m=+0.050469959 container create f2cba1d903b86d39a2b75c98d017f53c9d36ed004de8d01614c5dc7a38c5e0d0 (image=quay.io/ceph/ceph:v18, name=sharp_swanson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:40:10 np0005596060 podman[81442]: 2026-01-26 17:40:10.183977466 +0000 UTC m=+0.029285412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:40:10 np0005596060 systemd[1]: Started libpod-conmon-f2cba1d903b86d39a2b75c98d017f53c9d36ed004de8d01614c5dc7a38c5e0d0.scope.
Jan 26 12:40:10 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0414b2c92c09c3682ad433a3a5c87f0fbabff8d540f2369fde89e2da01129128/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0414b2c92c09c3682ad433a3a5c87f0fbabff8d540f2369fde89e2da01129128/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0414b2c92c09c3682ad433a3a5c87f0fbabff8d540f2369fde89e2da01129128/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:10 np0005596060 systemd[1]: Reloading.
Jan 26 12:40:10 np0005596060 podman[81442]: 2026-01-26 17:40:10.375916596 +0000 UTC m=+0.221224552 container init f2cba1d903b86d39a2b75c98d017f53c9d36ed004de8d01614c5dc7a38c5e0d0 (image=quay.io/ceph/ceph:v18, name=sharp_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 12:40:10 np0005596060 podman[81442]: 2026-01-26 17:40:10.38476948 +0000 UTC m=+0.230077406 container start f2cba1d903b86d39a2b75c98d017f53c9d36ed004de8d01614c5dc7a38c5e0d0 (image=quay.io/ceph/ceph:v18, name=sharp_swanson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 12:40:10 np0005596060 podman[81442]: 2026-01-26 17:40:10.389557591 +0000 UTC m=+0.234865517 container attach f2cba1d903b86d39a2b75c98d017f53c9d36ed004de8d01614c5dc7a38c5e0d0 (image=quay.io/ceph/ceph:v18, name=sharp_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:40:10 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:40:10 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:40:10 np0005596060 systemd[1]: Starting Ceph crash.compute-0 for d4cd1917-5876-51b6-bc64-65a16199754d...
Jan 26 12:40:10 np0005596060 ansible-async_wrapper.py[79581]: Done in kid B.
Jan 26 12:40:10 np0005596060 podman[81567]: 2026-01-26 17:40:10.908306596 +0000 UTC m=+0.045112054 container create 2653e44b26a1bfde8f7b4d0913c44a7afc6f0ca9739a6a8444982e8bd918b8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-crash-compute-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 12:40:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5cc46261ff3ce76c413f0b4a71ec4e81966d63d308b91be6d51019165540d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5cc46261ff3ce76c413f0b4a71ec4e81966d63d308b91be6d51019165540d2/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5cc46261ff3ce76c413f0b4a71ec4e81966d63d308b91be6d51019165540d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5cc46261ff3ce76c413f0b4a71ec4e81966d63d308b91be6d51019165540d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:10 np0005596060 podman[81567]: 2026-01-26 17:40:10.978518733 +0000 UTC m=+0.115324201 container init 2653e44b26a1bfde8f7b4d0913c44a7afc6f0ca9739a6a8444982e8bd918b8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-crash-compute-0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:40:10 np0005596060 podman[81567]: 2026-01-26 17:40:10.883754734 +0000 UTC m=+0.020560212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:40:10 np0005596060 podman[81567]: 2026-01-26 17:40:10.984396752 +0000 UTC m=+0.121202200 container start 2653e44b26a1bfde8f7b4d0913c44a7afc6f0ca9739a6a8444982e8bd918b8ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-crash-compute-0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:40:10 np0005596060 bash[81567]: 2653e44b26a1bfde8f7b4d0913c44a7afc6f0ca9739a6a8444982e8bd918b8ad
Jan 26 12:40:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Jan 26 12:40:10 np0005596060 systemd[1]: Started Ceph crash.compute-0 for d4cd1917-5876-51b6-bc64-65a16199754d.
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/555984229' entity='client.admin' 
Jan 26 12:40:11 np0005596060 systemd[1]: libpod-f2cba1d903b86d39a2b75c98d017f53c9d36ed004de8d01614c5dc7a38c5e0d0.scope: Deactivated successfully.
Jan 26 12:40:11 np0005596060 podman[81442]: 2026-01-26 17:40:11.02065205 +0000 UTC m=+0.865959986 container died f2cba1d903b86d39a2b75c98d017f53c9d36ed004de8d01614c5dc7a38c5e0d0 (image=quay.io/ceph/ceph:v18, name=sharp_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:11 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0414b2c92c09c3682ad433a3a5c87f0fbabff8d540f2369fde89e2da01129128-merged.mount: Deactivated successfully.
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:11 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev 8375f457-4ea5-4637-bcc7-1eb1ba173ff6 (Updating crash deployment (+1 -> 1))
Jan 26 12:40:11 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 8375f457-4ea5-4637-bcc7-1eb1ba173ff6 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 26 12:40:11 np0005596060 podman[81442]: 2026-01-26 17:40:11.077865509 +0000 UTC m=+0.923173445 container remove f2cba1d903b86d39a2b75c98d017f53c9d36ed004de8d01614c5dc7a38c5e0d0 (image=quay.io/ceph/ceph:v18, name=sharp_swanson, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:11 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 469eae75-5e24-4e25-a07c-357d20643d61 does not exist
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 26 12:40:11 np0005596060 systemd[1]: libpod-conmon-f2cba1d903b86d39a2b75c98d017f53c9d36ed004de8d01614c5dc7a38c5e0d0.scope: Deactivated successfully.
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:11 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 001158e1-ed67-4aca-955d-2bc20a9701af does not exist
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 26 12:40:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:11 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-crash-compute-0[81582]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 26 12:40:11 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-crash-compute-0[81582]: 2026-01-26T17:40:11.399+0000 7f970721a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 26 12:40:11 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-crash-compute-0[81582]: 2026-01-26T17:40:11.399+0000 7f970721a640 -1 AuthRegistry(0x7f9700066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 26 12:40:11 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-crash-compute-0[81582]: 2026-01-26T17:40:11.400+0000 7f970721a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 26 12:40:11 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-crash-compute-0[81582]: 2026-01-26T17:40:11.400+0000 7f970721a640 -1 AuthRegistry(0x7f9707219000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 26 12:40:11 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-crash-compute-0[81582]: 2026-01-26T17:40:11.401+0000 7f9704f8f640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 26 12:40:11 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-crash-compute-0[81582]: 2026-01-26T17:40:11.401+0000 7f970721a640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 26 12:40:11 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-crash-compute-0[81582]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 26 12:40:11 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-crash-compute-0[81582]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 26 12:40:11 np0005596060 python3[81703]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:40:11 np0005596060 podman[81769]: 2026-01-26 17:40:11.504924732 +0000 UTC m=+0.044827506 container create 425613d50f044e31893d9bba64d5fee005fb4da51ee17019e33978e60a4c7333 (image=quay.io/ceph/ceph:v18, name=sleepy_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:40:11 np0005596060 systemd[1]: Started libpod-conmon-425613d50f044e31893d9bba64d5fee005fb4da51ee17019e33978e60a4c7333.scope.
Jan 26 12:40:11 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:11 np0005596060 podman[81769]: 2026-01-26 17:40:11.484388572 +0000 UTC m=+0.024291386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:40:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ff1f1d710aa48e713336205a8e3f57280c29c915265f8b8a6f30141623ca0db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ff1f1d710aa48e713336205a8e3f57280c29c915265f8b8a6f30141623ca0db/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ff1f1d710aa48e713336205a8e3f57280c29c915265f8b8a6f30141623ca0db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:11 np0005596060 podman[81769]: 2026-01-26 17:40:11.602332799 +0000 UTC m=+0.142235613 container init 425613d50f044e31893d9bba64d5fee005fb4da51ee17019e33978e60a4c7333 (image=quay.io/ceph/ceph:v18, name=sleepy_galileo, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 12:40:11 np0005596060 podman[81769]: 2026-01-26 17:40:11.608926076 +0000 UTC m=+0.148828840 container start 425613d50f044e31893d9bba64d5fee005fb4da51ee17019e33978e60a4c7333 (image=quay.io/ceph/ceph:v18, name=sleepy_galileo, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:40:11 np0005596060 podman[81769]: 2026-01-26 17:40:11.61226872 +0000 UTC m=+0.152171524 container attach 425613d50f044e31893d9bba64d5fee005fb4da51ee17019e33978e60a4c7333 (image=quay.io/ceph/ceph:v18, name=sleepy_galileo, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 12:40:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/555984229' entity='client.admin' 
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 podman[81902]: 2026-01-26 17:40:12.237156803 +0000 UTC m=+0.240224424 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2702934706' entity='client.admin' 
Jan 26 12:40:12 np0005596060 systemd[1]: libpod-425613d50f044e31893d9bba64d5fee005fb4da51ee17019e33978e60a4c7333.scope: Deactivated successfully.
Jan 26 12:40:12 np0005596060 podman[81769]: 2026-01-26 17:40:12.273292018 +0000 UTC m=+0.813194782 container died 425613d50f044e31893d9bba64d5fee005fb4da51ee17019e33978e60a4c7333 (image=quay.io/ceph/ceph:v18, name=sleepy_galileo, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:40:12 np0005596060 systemd[1]: var-lib-containers-storage-overlay-1ff1f1d710aa48e713336205a8e3f57280c29c915265f8b8a6f30141623ca0db-merged.mount: Deactivated successfully.
Jan 26 12:40:12 np0005596060 podman[81902]: 2026-01-26 17:40:12.417681874 +0000 UTC m=+0.420749465 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 26 12:40:12 np0005596060 podman[81769]: 2026-01-26 17:40:12.444442631 +0000 UTC m=+0.984345435 container remove 425613d50f044e31893d9bba64d5fee005fb4da51ee17019e33978e60a4c7333 (image=quay.io/ceph/ceph:v18, name=sleepy_galileo, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 12:40:12 np0005596060 systemd[1]: libpod-conmon-425613d50f044e31893d9bba64d5fee005fb4da51ee17019e33978e60a4c7333.scope: Deactivated successfully.
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 4b93d3c4-0c1e-4e96-a4de-c1b0967d849f does not exist
Jan 26 12:40:12 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 98368ab0-6146-4a01-98c8-60c68df0f486 does not exist
Jan 26 12:40:12 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 22f0cee3-48e4-4c31-9e9f-d4011371e997 does not exist
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:12 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 26 12:40:12 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:40:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:40:12 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 26 12:40:12 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 26 12:40:12 np0005596060 python3[82031]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:40:12 np0005596060 podman[82072]: 2026-01-26 17:40:12.910578924 +0000 UTC m=+0.058181534 container create 7c652c151a7eb253b5100ee6c01dbd6487bda930df164a1c1070c91b6bef15c4 (image=quay.io/ceph/ceph:v18, name=keen_jemison, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 12:40:12 np0005596060 systemd[1]: Started libpod-conmon-7c652c151a7eb253b5100ee6c01dbd6487bda930df164a1c1070c91b6bef15c4.scope.
Jan 26 12:40:12 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b50f802a6f5130ee80c3ec7a3f47ceb7d20d9e8798f42b15b2f82939073d8072/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b50f802a6f5130ee80c3ec7a3f47ceb7d20d9e8798f42b15b2f82939073d8072/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b50f802a6f5130ee80c3ec7a3f47ceb7d20d9e8798f42b15b2f82939073d8072/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:12 np0005596060 podman[82072]: 2026-01-26 17:40:12.88946515 +0000 UTC m=+0.037067800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:40:12 np0005596060 podman[82072]: 2026-01-26 17:40:12.987092182 +0000 UTC m=+0.134694812 container init 7c652c151a7eb253b5100ee6c01dbd6487bda930df164a1c1070c91b6bef15c4 (image=quay.io/ceph/ceph:v18, name=keen_jemison, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 12:40:12 np0005596060 podman[82072]: 2026-01-26 17:40:12.994291544 +0000 UTC m=+0.141894154 container start 7c652c151a7eb253b5100ee6c01dbd6487bda930df164a1c1070c91b6bef15c4 (image=quay.io/ceph/ceph:v18, name=keen_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:40:12 np0005596060 podman[82072]: 2026-01-26 17:40:12.997414093 +0000 UTC m=+0.145016713 container attach 7c652c151a7eb253b5100ee6c01dbd6487bda930df164a1c1070c91b6bef15c4 (image=quay.io/ceph/ceph:v18, name=keen_jemison, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 12:40:13 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/2702934706' entity='client.admin' 
Jan 26 12:40:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:40:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 12:40:13 np0005596060 podman[82193]: 2026-01-26 17:40:13.292427083 +0000 UTC m=+0.040065296 container create 2aab3e7e071069d92584ef8b3fbb97464187c9385bd46978228b4878404ef95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 12:40:13 np0005596060 systemd[1]: Started libpod-conmon-2aab3e7e071069d92584ef8b3fbb97464187c9385bd46978228b4878404ef95f.scope.
Jan 26 12:40:13 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:13 np0005596060 podman[82193]: 2026-01-26 17:40:13.274635362 +0000 UTC m=+0.022273595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:40:13 np0005596060 podman[82193]: 2026-01-26 17:40:13.502706977 +0000 UTC m=+0.250345240 container init 2aab3e7e071069d92584ef8b3fbb97464187c9385bd46978228b4878404ef95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_solomon, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:40:13 np0005596060 podman[82193]: 2026-01-26 17:40:13.513615963 +0000 UTC m=+0.261254206 container start 2aab3e7e071069d92584ef8b3fbb97464187c9385bd46978228b4878404ef95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_solomon, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:40:13 np0005596060 amazing_solomon[82211]: 167 167
Jan 26 12:40:13 np0005596060 systemd[1]: libpod-2aab3e7e071069d92584ef8b3fbb97464187c9385bd46978228b4878404ef95f.scope: Deactivated successfully.
Jan 26 12:40:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Jan 26 12:40:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2243992043' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 26 12:40:13 np0005596060 podman[82193]: 2026-01-26 17:40:13.583365549 +0000 UTC m=+0.331003762 container attach 2aab3e7e071069d92584ef8b3fbb97464187c9385bd46978228b4878404ef95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 12:40:13 np0005596060 podman[82193]: 2026-01-26 17:40:13.584526239 +0000 UTC m=+0.332164462 container died 2aab3e7e071069d92584ef8b3fbb97464187c9385bd46978228b4878404ef95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 12:40:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:40:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:40:14 np0005596060 ceph-mgr[74563]: [progress INFO root] Writing back 1 completed events
Jan 26 12:40:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 26 12:40:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:40:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:40:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:40:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:40:14 np0005596060 systemd[1]: var-lib-containers-storage-overlay-20219e469fb789bf023571c86b73256fe1f6420fc5b46f99dd5bfa30404b7772-merged.mount: Deactivated successfully.
Jan 26 12:40:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 26 12:40:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:40:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2243992043' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 26 12:40:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 26 12:40:14 np0005596060 keen_jemison[82123]: set require_min_compat_client to mimic
Jan 26 12:40:14 np0005596060 systemd[1]: libpod-7c652c151a7eb253b5100ee6c01dbd6487bda930df164a1c1070c91b6bef15c4.scope: Deactivated successfully.
Jan 26 12:40:14 np0005596060 ceph-mon[74267]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 26 12:40:14 np0005596060 ceph-mon[74267]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 26 12:40:14 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/2243992043' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 26 12:40:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:15 np0005596060 podman[82193]: 2026-01-26 17:40:15.079321907 +0000 UTC m=+1.826960160 container remove 2aab3e7e071069d92584ef8b3fbb97464187c9385bd46978228b4878404ef95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:40:15 np0005596060 systemd[1]: libpod-conmon-2aab3e7e071069d92584ef8b3fbb97464187c9385bd46978228b4878404ef95f.scope: Deactivated successfully.
Jan 26 12:40:15 np0005596060 podman[82072]: 2026-01-26 17:40:15.118348345 +0000 UTC m=+2.265951035 container died 7c652c151a7eb253b5100ee6c01dbd6487bda930df164a1c1070c91b6bef15c4 (image=quay.io/ceph/ceph:v18, name=keen_jemison, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:40:15 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b50f802a6f5130ee80c3ec7a3f47ceb7d20d9e8798f42b15b2f82939073d8072-merged.mount: Deactivated successfully.
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:40:15 np0005596060 podman[82072]: 2026-01-26 17:40:15.745647918 +0000 UTC m=+2.893250518 container remove 7c652c151a7eb253b5100ee6c01dbd6487bda930df164a1c1070c91b6bef15c4 (image=quay.io/ceph/ceph:v18, name=keen_jemison, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:15 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.mbryrf (unknown last config time)...
Jan 26 12:40:15 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.mbryrf (unknown last config time)...
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mbryrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mbryrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:40:15 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.mbryrf on compute-0
Jan 26 12:40:15 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.mbryrf on compute-0
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/2243992043' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:15 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mbryrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 12:40:15 np0005596060 systemd[1]: libpod-conmon-7c652c151a7eb253b5100ee6c01dbd6487bda930df164a1c1070c91b6bef15c4.scope: Deactivated successfully.
Jan 26 12:40:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:16 np0005596060 python3[82402]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:40:16 np0005596060 podman[82405]: 2026-01-26 17:40:16.296326652 +0000 UTC m=+0.026493402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:40:16 np0005596060 podman[82405]: 2026-01-26 17:40:16.513020379 +0000 UTC m=+0.243187109 container create 0899054e5adc6e078741560aa359f031db18bbec8d6f08907e2a830276ff9df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_torvalds, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:40:16 np0005596060 systemd[1]: Started libpod-conmon-0899054e5adc6e078741560aa359f031db18bbec8d6f08907e2a830276ff9df7.scope.
Jan 26 12:40:16 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:16 np0005596060 podman[82419]: 2026-01-26 17:40:16.744453639 +0000 UTC m=+0.358525449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:40:16 np0005596060 podman[82405]: 2026-01-26 17:40:16.921994024 +0000 UTC m=+0.652160844 container init 0899054e5adc6e078741560aa359f031db18bbec8d6f08907e2a830276ff9df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 12:40:16 np0005596060 podman[82405]: 2026-01-26 17:40:16.935293791 +0000 UTC m=+0.665460551 container start 0899054e5adc6e078741560aa359f031db18bbec8d6f08907e2a830276ff9df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_torvalds, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 12:40:16 np0005596060 optimistic_torvalds[82433]: 167 167
Jan 26 12:40:16 np0005596060 systemd[1]: libpod-0899054e5adc6e078741560aa359f031db18bbec8d6f08907e2a830276ff9df7.scope: Deactivated successfully.
Jan 26 12:40:16 np0005596060 ceph-mon[74267]: Reconfiguring mgr.compute-0.mbryrf (unknown last config time)...
Jan 26 12:40:16 np0005596060 ceph-mon[74267]: Reconfiguring daemon mgr.compute-0.mbryrf on compute-0
Jan 26 12:40:16 np0005596060 podman[82405]: 2026-01-26 17:40:16.973661232 +0000 UTC m=+0.703828002 container attach 0899054e5adc6e078741560aa359f031db18bbec8d6f08907e2a830276ff9df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_torvalds, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 12:40:16 np0005596060 podman[82405]: 2026-01-26 17:40:16.974268897 +0000 UTC m=+0.704435727 container died 0899054e5adc6e078741560aa359f031db18bbec8d6f08907e2a830276ff9df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Jan 26 12:40:17 np0005596060 podman[82419]: 2026-01-26 17:40:17.004298408 +0000 UTC m=+0.618370138 container create 121824f485514f455ed0cc5a8c516c48b2ebe13ba80f371d47c2961778215524 (image=quay.io/ceph/ceph:v18, name=thirsty_gauss, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 12:40:17 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b5b70b97fcdf449907b0738bb6fc3c4953194cd7a1594e70e853f22e01d0f506-merged.mount: Deactivated successfully.
Jan 26 12:40:17 np0005596060 podman[82405]: 2026-01-26 17:40:17.037921349 +0000 UTC m=+0.768088069 container remove 0899054e5adc6e078741560aa359f031db18bbec8d6f08907e2a830276ff9df7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_torvalds, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:40:17 np0005596060 systemd[1]: Started libpod-conmon-121824f485514f455ed0cc5a8c516c48b2ebe13ba80f371d47c2961778215524.scope.
Jan 26 12:40:17 np0005596060 systemd[1]: libpod-conmon-0899054e5adc6e078741560aa359f031db18bbec8d6f08907e2a830276ff9df7.scope: Deactivated successfully.
Jan 26 12:40:17 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae8a6f12aca4896f1a4d4d1ad68c0150cff4c436f606f29e6654d2d0c3a23e78/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae8a6f12aca4896f1a4d4d1ad68c0150cff4c436f606f29e6654d2d0c3a23e78/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae8a6f12aca4896f1a4d4d1ad68c0150cff4c436f606f29e6654d2d0c3a23e78/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:17 np0005596060 podman[82419]: 2026-01-26 17:40:17.267262416 +0000 UTC m=+0.881334226 container init 121824f485514f455ed0cc5a8c516c48b2ebe13ba80f371d47c2961778215524 (image=quay.io/ceph/ceph:v18, name=thirsty_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Jan 26 12:40:17 np0005596060 podman[82419]: 2026-01-26 17:40:17.280523292 +0000 UTC m=+0.894595012 container start 121824f485514f455ed0cc5a8c516c48b2ebe13ba80f371d47c2961778215524 (image=quay.io/ceph/ceph:v18, name=thirsty_gauss, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 12:40:17 np0005596060 podman[82419]: 2026-01-26 17:40:17.285148099 +0000 UTC m=+0.899219899 container attach 121824f485514f455ed0cc5a8c516c48b2ebe13ba80f371d47c2961778215524 (image=quay.io/ceph/ceph:v18, name=thirsty_gauss, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 12:40:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:40:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:40:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:40:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:40:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:40:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:40:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:40:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:17 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b5ec4ea6-6381-40b1-8c2b-af5cfff921ba does not exist
Jan 26 12:40:17 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8414833f-ed10-491b-ad72-6d8be0283706 does not exist
Jan 26 12:40:17 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 039ef3af-d6be-4b42-abe1-b49c528bcfff does not exist
Jan 26 12:40:17 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:40:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 26 12:40:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:40:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:19 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Added host compute-0
Jan 26 12:40:19 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:19 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 72de7109-9656-42ad-bd67-bb0a10fcd5c2 does not exist
Jan 26 12:40:19 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 59aaf823-c35e-4edf-818b-b4ae77c2618a does not exist
Jan 26 12:40:19 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 26a0c23a-cf3e-4df5-95a5-103171ca7729 does not exist
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: Added host compute-0
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:40:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:40:20 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 26 12:40:20 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 26 12:40:21 np0005596060 ceph-mon[74267]: Deploying cephadm binary to compute-1
Jan 26 12:40:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 26 12:40:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:40:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:25 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Added host compute-1
Jan 26 12:40:25 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 26 12:40:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:40:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:26 np0005596060 ceph-mon[74267]: Added host compute-1
Jan 26 12:40:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:40:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:26 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 26 12:40:26 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 26 12:40:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:28 np0005596060 ceph-mon[74267]: Deploying cephadm binary to compute-2
Jan 26 12:40:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:40:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:29 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:40:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 26 12:40:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:30 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Added host compute-2
Jan 26 12:40:30 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 26 12:40:30 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 26 12:40:30 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 26 12:40:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 26 12:40:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:30 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 26 12:40:30 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 26 12:40:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 26 12:40:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:30 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 26 12:40:30 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 26 12:40:30 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 26 12:40:30 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 26 12:40:30 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 26 12:40:30 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 26 12:40:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Jan 26 12:40:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:30 np0005596060 thirsty_gauss[82455]: Added host 'compute-0' with addr '192.168.122.100'
Jan 26 12:40:30 np0005596060 thirsty_gauss[82455]: Added host 'compute-1' with addr '192.168.122.101'
Jan 26 12:40:30 np0005596060 thirsty_gauss[82455]: Added host 'compute-2' with addr '192.168.122.102'
Jan 26 12:40:30 np0005596060 thirsty_gauss[82455]: Scheduled mon update...
Jan 26 12:40:30 np0005596060 thirsty_gauss[82455]: Scheduled mgr update...
Jan 26 12:40:30 np0005596060 thirsty_gauss[82455]: Scheduled osd.default_drive_group update...
Jan 26 12:40:31 np0005596060 systemd[1]: libpod-121824f485514f455ed0cc5a8c516c48b2ebe13ba80f371d47c2961778215524.scope: Deactivated successfully.
Jan 26 12:40:31 np0005596060 podman[82419]: 2026-01-26 17:40:31.009388654 +0000 UTC m=+14.623460384 container died 121824f485514f455ed0cc5a8c516c48b2ebe13ba80f371d47c2961778215524 (image=quay.io/ceph/ceph:v18, name=thirsty_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 26 12:40:31 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ae8a6f12aca4896f1a4d4d1ad68c0150cff4c436f606f29e6654d2d0c3a23e78-merged.mount: Deactivated successfully.
Jan 26 12:40:31 np0005596060 podman[82419]: 2026-01-26 17:40:31.126598439 +0000 UTC m=+14.740670149 container remove 121824f485514f455ed0cc5a8c516c48b2ebe13ba80f371d47c2961778215524 (image=quay.io/ceph/ceph:v18, name=thirsty_gauss, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 12:40:31 np0005596060 systemd[1]: libpod-conmon-121824f485514f455ed0cc5a8c516c48b2ebe13ba80f371d47c2961778215524.scope: Deactivated successfully.
Jan 26 12:40:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:31 np0005596060 python3[82738]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:40:31 np0005596060 podman[82740]: 2026-01-26 17:40:31.639908274 +0000 UTC m=+0.024661299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:40:31 np0005596060 podman[82740]: 2026-01-26 17:40:31.750862251 +0000 UTC m=+0.135615246 container create 630101bb05f67b31d3c311391fff1c59a96da95aa90512aaa99556042057ea2d (image=quay.io/ceph/ceph:v18, name=elated_jepsen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:40:31 np0005596060 systemd[1]: Started libpod-conmon-630101bb05f67b31d3c311391fff1c59a96da95aa90512aaa99556042057ea2d.scope.
Jan 26 12:40:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e203ef02c013ef7cbe82a0637e0ca8abd8641b7eb3996cc39990831527c5ae42/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e203ef02c013ef7cbe82a0637e0ca8abd8641b7eb3996cc39990831527c5ae42/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e203ef02c013ef7cbe82a0637e0ca8abd8641b7eb3996cc39990831527c5ae42/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:31 np0005596060 podman[82740]: 2026-01-26 17:40:31.913726878 +0000 UTC m=+0.298479893 container init 630101bb05f67b31d3c311391fff1c59a96da95aa90512aaa99556042057ea2d (image=quay.io/ceph/ceph:v18, name=elated_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:40:31 np0005596060 podman[82740]: 2026-01-26 17:40:31.921684861 +0000 UTC m=+0.306437846 container start 630101bb05f67b31d3c311391fff1c59a96da95aa90512aaa99556042057ea2d (image=quay.io/ceph/ceph:v18, name=elated_jepsen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:40:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:32 np0005596060 podman[82740]: 2026-01-26 17:40:32.148822166 +0000 UTC m=+0.533575261 container attach 630101bb05f67b31d3c311391fff1c59a96da95aa90512aaa99556042057ea2d (image=quay.io/ceph/ceph:v18, name=elated_jepsen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 12:40:32 np0005596060 ceph-mon[74267]: Added host compute-2
Jan 26 12:40:32 np0005596060 ceph-mon[74267]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 26 12:40:32 np0005596060 ceph-mon[74267]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 26 12:40:32 np0005596060 ceph-mon[74267]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 26 12:40:32 np0005596060 ceph-mon[74267]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 26 12:40:32 np0005596060 ceph-mon[74267]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 26 12:40:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 26 12:40:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222532491' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 12:40:32 np0005596060 elated_jepsen[82756]: 
Jan 26 12:40:32 np0005596060 elated_jepsen[82756]: {"fsid":"d4cd1917-5876-51b6-bc64-65a16199754d","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":97,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-26T17:38:51.755695+0000","services":{}},"progress_events":{}}
Jan 26 12:40:32 np0005596060 systemd[1]: libpod-630101bb05f67b31d3c311391fff1c59a96da95aa90512aaa99556042057ea2d.scope: Deactivated successfully.
Jan 26 12:40:32 np0005596060 podman[82740]: 2026-01-26 17:40:32.547534773 +0000 UTC m=+0.932287778 container died 630101bb05f67b31d3c311391fff1c59a96da95aa90512aaa99556042057ea2d (image=quay.io/ceph/ceph:v18, name=elated_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 12:40:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay-e203ef02c013ef7cbe82a0637e0ca8abd8641b7eb3996cc39990831527c5ae42-merged.mount: Deactivated successfully.
Jan 26 12:40:33 np0005596060 podman[82740]: 2026-01-26 17:40:33.678831089 +0000 UTC m=+2.063584084 container remove 630101bb05f67b31d3c311391fff1c59a96da95aa90512aaa99556042057ea2d (image=quay.io/ceph/ceph:v18, name=elated_jepsen, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 12:40:33 np0005596060 systemd[1]: libpod-conmon-630101bb05f67b31d3c311391fff1c59a96da95aa90512aaa99556042057ea2d.scope: Deactivated successfully.
Jan 26 12:40:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:40:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:40:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:40:43
Jan 26 12:40:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:40:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:40:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] No pools available
Jan 26 12:40:44 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:40:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:40:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:40:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:40:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:40:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:40:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:40:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:40:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:40:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:40:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:40:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:40:47 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 26 12:40:47 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 26 12:40:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 12:40:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:40:48 np0005596060 ceph-mon[74267]: Updating compute-1:/etc/ceph/ceph.conf
Jan 26 12:40:48 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:40:48 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:40:49 np0005596060 ceph-mon[74267]: Updating compute-1:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:40:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:40:50 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 12:40:50 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 12:40:50 np0005596060 ceph-mon[74267]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 26 12:40:51 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.client.admin.keyring
Jan 26 12:40:51 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.client.admin.keyring
Jan 26 12:40:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:53 np0005596060 ceph-mon[74267]: Updating compute-1:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.client.admin.keyring
Jan 26 12:40:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:40:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:40:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:40:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:53 np0005596060 ceph-mgr[74563]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 26 12:40:53 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 26 12:40:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:53 np0005596060 ceph-mgr[74563]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 26 12:40:53 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 26 12:40:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:53 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev c61709dc-7ced-4402-a007-3fcf65f87ddb (Updating crash deployment (+1 -> 2))
Jan 26 12:40:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 26 12:40:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:40:53.062+0000 7f4ed13ec640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: service_name: mon
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: placement:
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]:  hosts:
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]:  - compute-0
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]:  - compute-1
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]:  - compute-2
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T17:40:53.063+0000 7f4ed13ec640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: service_name: mgr
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: placement:
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]:  hosts:
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]:  - compute-0
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]:  - compute-1
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]:  - compute-2
Jan 26 12:40:53 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 26 12:40:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 26 12:40:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:40:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:40:53 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 26 12:40:53 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 26 12:40:54 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 26 12:40:54 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:54 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:54 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:54 np0005596060 ceph-mon[74267]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 26 12:40:54 np0005596060 ceph-mon[74267]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 26 12:40:54 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 12:40:54 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 26 12:40:54 np0005596060 ceph-mon[74267]: Deploying daemon crash.compute-1 on compute-1
Jan 26 12:40:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:40:55 np0005596060 ceph-mon[74267]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 26 12:40:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:40:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:40:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:56 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev c61709dc-7ced-4402-a007-3fcf65f87ddb (Updating crash deployment (+1 -> 2))
Jan 26 12:40:56 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event c61709dc-7ced-4402-a007-3fcf65f87ddb (Updating crash deployment (+1 -> 2)) in 3 seconds
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:40:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:40:56 np0005596060 podman[82932]: 2026-01-26 17:40:56.659363894 +0000 UTC m=+0.055831553 container create ccfa2fc472d313b9397ccd4a6d98924046a0313f5bc726c185ff4cf14592cd95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_boyd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:40:56 np0005596060 systemd[1]: Started libpod-conmon-ccfa2fc472d313b9397ccd4a6d98924046a0313f5bc726c185ff4cf14592cd95.scope.
Jan 26 12:40:56 np0005596060 podman[82932]: 2026-01-26 17:40:56.63132 +0000 UTC m=+0.027787679 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:40:56 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:57 np0005596060 podman[82932]: 2026-01-26 17:40:57.031296267 +0000 UTC m=+0.427763986 container init ccfa2fc472d313b9397ccd4a6d98924046a0313f5bc726c185ff4cf14592cd95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 26 12:40:57 np0005596060 podman[82932]: 2026-01-26 17:40:57.038842019 +0000 UTC m=+0.435309688 container start ccfa2fc472d313b9397ccd4a6d98924046a0313f5bc726c185ff4cf14592cd95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_boyd, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:40:57 np0005596060 stupefied_boyd[82949]: 167 167
Jan 26 12:40:57 np0005596060 systemd[1]: libpod-ccfa2fc472d313b9397ccd4a6d98924046a0313f5bc726c185ff4cf14592cd95.scope: Deactivated successfully.
Jan 26 12:40:57 np0005596060 podman[82932]: 2026-01-26 17:40:57.063933078 +0000 UTC m=+0.460400777 container attach ccfa2fc472d313b9397ccd4a6d98924046a0313f5bc726c185ff4cf14592cd95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_boyd, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 12:40:57 np0005596060 podman[82932]: 2026-01-26 17:40:57.064688168 +0000 UTC m=+0.461155827 container died ccfa2fc472d313b9397ccd4a6d98924046a0313f5bc726c185ff4cf14592cd95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:40:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:57 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7d4fb880d5ba26562f434d033e701e2161069388726b38d9dfc852d3ddea05c8-merged.mount: Deactivated successfully.
Jan 26 12:40:57 np0005596060 podman[82932]: 2026-01-26 17:40:57.293110146 +0000 UTC m=+0.689577845 container remove ccfa2fc472d313b9397ccd4a6d98924046a0313f5bc726c185ff4cf14592cd95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:40:57 np0005596060 systemd[1]: libpod-conmon-ccfa2fc472d313b9397ccd4a6d98924046a0313f5bc726c185ff4cf14592cd95.scope: Deactivated successfully.
Jan 26 12:40:57 np0005596060 podman[82973]: 2026-01-26 17:40:57.508277697 +0000 UTC m=+0.061715033 container create cc6175da25c2ce8a4d6d3df910268af6345a5b3398cca9d4ba09b41ac0cdfa50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:40:57 np0005596060 systemd[1]: Started libpod-conmon-cc6175da25c2ce8a4d6d3df910268af6345a5b3398cca9d4ba09b41ac0cdfa50.scope.
Jan 26 12:40:57 np0005596060 podman[82973]: 2026-01-26 17:40:57.478450757 +0000 UTC m=+0.031888133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:40:57 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:40:57 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add12a01376d2d5d369b0d9d969d23123206fb18c99f125a458c53e16097d91f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:57 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add12a01376d2d5d369b0d9d969d23123206fb18c99f125a458c53e16097d91f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:57 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add12a01376d2d5d369b0d9d969d23123206fb18c99f125a458c53e16097d91f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:57 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add12a01376d2d5d369b0d9d969d23123206fb18c99f125a458c53e16097d91f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:57 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/add12a01376d2d5d369b0d9d969d23123206fb18c99f125a458c53e16097d91f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:40:57 np0005596060 podman[82973]: 2026-01-26 17:40:57.607212217 +0000 UTC m=+0.160649643 container init cc6175da25c2ce8a4d6d3df910268af6345a5b3398cca9d4ba09b41ac0cdfa50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:40:57 np0005596060 podman[82973]: 2026-01-26 17:40:57.619649094 +0000 UTC m=+0.173086460 container start cc6175da25c2ce8a4d6d3df910268af6345a5b3398cca9d4ba09b41ac0cdfa50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:40:57 np0005596060 podman[82973]: 2026-01-26 17:40:57.624914848 +0000 UTC m=+0.178352214 container attach cc6175da25c2ce8a4d6d3df910268af6345a5b3398cca9d4ba09b41ac0cdfa50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:40:58 np0005596060 busy_dijkstra[82989]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:40:58 np0005596060 busy_dijkstra[82989]: --> relative data size: 1.0
Jan 26 12:40:58 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 26 12:40:58 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2192cb4e-a674-4139-ac32-841945fb067d
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "1c0724d8-8dd5-4d9e-bd3b-d97668d7fea7"} v 0) v1
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3579938651' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1c0724d8-8dd5-4d9e-bd3b-d97668d7fea7"}]: dispatch
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3579938651' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1c0724d8-8dd5-4d9e-bd3b-d97668d7fea7"}]': finished
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:40:58 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "2192cb4e-a674-4139-ac32-841945fb067d"} v 0) v1
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1216951465' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2192cb4e-a674-4139-ac32-841945fb067d"}]: dispatch
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1216951465' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2192cb4e-a674-4139-ac32-841945fb067d"}]': finished
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:40:58 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 12:40:58 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:40:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:40:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:40:59 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 26 12:40:59 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 26 12:40:59 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 26 12:40:59 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 26 12:40:59 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 26 12:40:59 np0005596060 lvm[83037]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 12:40:59 np0005596060 lvm[83037]: VG ceph_vg0 finished
Jan 26 12:40:59 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 26 12:40:59 np0005596060 ceph-mgr[74563]: [progress INFO root] Writing back 2 completed events
Jan 26 12:40:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 26 12:40:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 26 12:40:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2497974338' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 26 12:40:59 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.101:0/3579938651' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1c0724d8-8dd5-4d9e-bd3b-d97668d7fea7"}]: dispatch
Jan 26 12:40:59 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.101:0/3579938651' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1c0724d8-8dd5-4d9e-bd3b-d97668d7fea7"}]': finished
Jan 26 12:40:59 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1216951465' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "2192cb4e-a674-4139-ac32-841945fb067d"}]: dispatch
Jan 26 12:40:59 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1216951465' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "2192cb4e-a674-4139-ac32-841945fb067d"}]': finished
Jan 26 12:40:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:40:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 26 12:40:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318154196' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 26 12:40:59 np0005596060 busy_dijkstra[82989]: stderr: got monmap epoch 1
Jan 26 12:40:59 np0005596060 busy_dijkstra[82989]: --> Creating keyring file for osd.1
Jan 26 12:40:59 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 26 12:40:59 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 26 12:40:59 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 2192cb4e-a674-4139-ac32-841945fb067d --setuser ceph --setgroup ceph
Jan 26 12:40:59 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 26 12:41:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:41:00 np0005596060 ceph-mon[74267]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 26 12:41:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: stderr: 2026-01-26T17:40:59.724+0000 7f5dbfec4740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: stderr: 2026-01-26T17:40:59.724+0000 7f5dbfec4740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: stderr: 2026-01-26T17:40:59.724+0000 7f5dbfec4740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: stderr: 2026-01-26T17:40:59.724+0000 7f5dbfec4740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 26 12:41:02 np0005596060 busy_dijkstra[82989]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 26 12:41:02 np0005596060 systemd[1]: libpod-cc6175da25c2ce8a4d6d3df910268af6345a5b3398cca9d4ba09b41ac0cdfa50.scope: Deactivated successfully.
Jan 26 12:41:02 np0005596060 systemd[1]: libpod-cc6175da25c2ce8a4d6d3df910268af6345a5b3398cca9d4ba09b41ac0cdfa50.scope: Consumed 2.579s CPU time.
Jan 26 12:41:02 np0005596060 podman[83949]: 2026-01-26 17:41:02.742460391 +0000 UTC m=+0.021887208 container died cc6175da25c2ce8a4d6d3df910268af6345a5b3398cca9d4ba09b41ac0cdfa50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 12:41:02 np0005596060 systemd[1]: var-lib-containers-storage-overlay-add12a01376d2d5d369b0d9d969d23123206fb18c99f125a458c53e16097d91f-merged.mount: Deactivated successfully.
Jan 26 12:41:02 np0005596060 podman[83949]: 2026-01-26 17:41:02.792265 +0000 UTC m=+0.071691807 container remove cc6175da25c2ce8a4d6d3df910268af6345a5b3398cca9d4ba09b41ac0cdfa50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:41:02 np0005596060 systemd[1]: libpod-conmon-cc6175da25c2ce8a4d6d3df910268af6345a5b3398cca9d4ba09b41ac0cdfa50.scope: Deactivated successfully.
Jan 26 12:41:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:41:03 np0005596060 podman[84100]: 2026-01-26 17:41:03.465349835 +0000 UTC m=+0.039125638 container create 40e85cd2427658136e854b0eb697d04834c29c655deb454ee9ac8786a7cb9448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 12:41:03 np0005596060 systemd[1]: Started libpod-conmon-40e85cd2427658136e854b0eb697d04834c29c655deb454ee9ac8786a7cb9448.scope.
Jan 26 12:41:03 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:03 np0005596060 podman[84100]: 2026-01-26 17:41:03.527060536 +0000 UTC m=+0.100836359 container init 40e85cd2427658136e854b0eb697d04834c29c655deb454ee9ac8786a7cb9448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 12:41:03 np0005596060 podman[84100]: 2026-01-26 17:41:03.533637934 +0000 UTC m=+0.107413737 container start 40e85cd2427658136e854b0eb697d04834c29c655deb454ee9ac8786a7cb9448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:41:03 np0005596060 podman[84100]: 2026-01-26 17:41:03.538094497 +0000 UTC m=+0.111870300 container attach 40e85cd2427658136e854b0eb697d04834c29c655deb454ee9ac8786a7cb9448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 12:41:03 np0005596060 musing_bhabha[84117]: 167 167
Jan 26 12:41:03 np0005596060 systemd[1]: libpod-40e85cd2427658136e854b0eb697d04834c29c655deb454ee9ac8786a7cb9448.scope: Deactivated successfully.
Jan 26 12:41:03 np0005596060 podman[84100]: 2026-01-26 17:41:03.541231367 +0000 UTC m=+0.115007210 container died 40e85cd2427658136e854b0eb697d04834c29c655deb454ee9ac8786a7cb9448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:41:03 np0005596060 podman[84100]: 2026-01-26 17:41:03.449193203 +0000 UTC m=+0.022969016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:41:03 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f3a61e8df47b01412af40778de0b26a5de1e448b770364cbca065f4d8f67491b-merged.mount: Deactivated successfully.
Jan 26 12:41:03 np0005596060 podman[84100]: 2026-01-26 17:41:03.58372224 +0000 UTC m=+0.157498033 container remove 40e85cd2427658136e854b0eb697d04834c29c655deb454ee9ac8786a7cb9448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 12:41:03 np0005596060 systemd[1]: libpod-conmon-40e85cd2427658136e854b0eb697d04834c29c655deb454ee9ac8786a7cb9448.scope: Deactivated successfully.
Jan 26 12:41:03 np0005596060 podman[84141]: 2026-01-26 17:41:03.72151406 +0000 UTC m=+0.039207750 container create 5a886a65fe4e5d7d62ff29992178c7bf4a308eba0ed19d7eaf31cc3376c234b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 12:41:03 np0005596060 systemd[1]: Started libpod-conmon-5a886a65fe4e5d7d62ff29992178c7bf4a308eba0ed19d7eaf31cc3376c234b2.scope.
Jan 26 12:41:03 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7213a018f0ef19ef94602fc1852fd0533de25e3046a8e82fae1d590cca66b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7213a018f0ef19ef94602fc1852fd0533de25e3046a8e82fae1d590cca66b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7213a018f0ef19ef94602fc1852fd0533de25e3046a8e82fae1d590cca66b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62d7213a018f0ef19ef94602fc1852fd0533de25e3046a8e82fae1d590cca66b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:03 np0005596060 podman[84141]: 2026-01-26 17:41:03.789346917 +0000 UTC m=+0.107040607 container init 5a886a65fe4e5d7d62ff29992178c7bf4a308eba0ed19d7eaf31cc3376c234b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 26 12:41:03 np0005596060 podman[84141]: 2026-01-26 17:41:03.795262038 +0000 UTC m=+0.112955728 container start 5a886a65fe4e5d7d62ff29992178c7bf4a308eba0ed19d7eaf31cc3376c234b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 12:41:03 np0005596060 podman[84141]: 2026-01-26 17:41:03.799017914 +0000 UTC m=+0.116711604 container attach 5a886a65fe4e5d7d62ff29992178c7bf4a308eba0ed19d7eaf31cc3376c234b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_vaughan, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 12:41:03 np0005596060 podman[84141]: 2026-01-26 17:41:03.704284481 +0000 UTC m=+0.021978201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:41:03 np0005596060 python3[84187]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:41:04 np0005596060 podman[84189]: 2026-01-26 17:41:04.082831093 +0000 UTC m=+0.063394216 container create 1a9e2e1e628ba5af116d971b5f2a5a725ed23f5d29ec4ba06d66def3d925ffd2 (image=quay.io/ceph/ceph:v18, name=friendly_bassi, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:41:04 np0005596060 systemd[1]: Started libpod-conmon-1a9e2e1e628ba5af116d971b5f2a5a725ed23f5d29ec4ba06d66def3d925ffd2.scope.
Jan 26 12:41:04 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:04 np0005596060 podman[84189]: 2026-01-26 17:41:04.054815659 +0000 UTC m=+0.035378842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:41:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7812987f1cb78c5cbe5808139bdcce023419648cc50d8ee5b3b90234014e15d6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7812987f1cb78c5cbe5808139bdcce023419648cc50d8ee5b3b90234014e15d6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7812987f1cb78c5cbe5808139bdcce023419648cc50d8ee5b3b90234014e15d6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:04 np0005596060 podman[84189]: 2026-01-26 17:41:04.167381686 +0000 UTC m=+0.147944809 container init 1a9e2e1e628ba5af116d971b5f2a5a725ed23f5d29ec4ba06d66def3d925ffd2 (image=quay.io/ceph/ceph:v18, name=friendly_bassi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:41:04 np0005596060 podman[84189]: 2026-01-26 17:41:04.174365174 +0000 UTC m=+0.154928277 container start 1a9e2e1e628ba5af116d971b5f2a5a725ed23f5d29ec4ba06d66def3d925ffd2 (image=quay.io/ceph/ceph:v18, name=friendly_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 26 12:41:04 np0005596060 podman[84189]: 2026-01-26 17:41:04.177262567 +0000 UTC m=+0.157825670 container attach 1a9e2e1e628ba5af116d971b5f2a5a725ed23f5d29ec4ba06d66def3d925ffd2 (image=quay.io/ceph/ceph:v18, name=friendly_bassi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]: {
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:    "1": [
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:        {
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            "devices": [
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "/dev/loop3"
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            ],
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            "lv_name": "ceph_lv0",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            "lv_size": "7511998464",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            "name": "ceph_lv0",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            "tags": {
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "ceph.cluster_name": "ceph",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "ceph.crush_device_class": "",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "ceph.encrypted": "0",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "ceph.osd_id": "1",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "ceph.type": "block",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:                "ceph.vdo": "0"
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            },
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            "type": "block",
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:            "vg_name": "ceph_vg0"
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:        }
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]:    ]
Jan 26 12:41:04 np0005596060 cool_vaughan[84157]: }
Jan 26 12:41:04 np0005596060 systemd[1]: libpod-5a886a65fe4e5d7d62ff29992178c7bf4a308eba0ed19d7eaf31cc3376c234b2.scope: Deactivated successfully.
Jan 26 12:41:04 np0005596060 podman[84141]: 2026-01-26 17:41:04.613895199 +0000 UTC m=+0.931588889 container died 5a886a65fe4e5d7d62ff29992178c7bf4a308eba0ed19d7eaf31cc3376c234b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_vaughan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 26 12:41:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-62d7213a018f0ef19ef94602fc1852fd0533de25e3046a8e82fae1d590cca66b-merged.mount: Deactivated successfully.
Jan 26 12:41:04 np0005596060 podman[84141]: 2026-01-26 17:41:04.66770519 +0000 UTC m=+0.985398870 container remove 5a886a65fe4e5d7d62ff29992178c7bf4a308eba0ed19d7eaf31cc3376c234b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:41:04 np0005596060 systemd[1]: libpod-conmon-5a886a65fe4e5d7d62ff29992178c7bf4a308eba0ed19d7eaf31cc3376c234b2.scope: Deactivated successfully.
Jan 26 12:41:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 26 12:41:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 26 12:41:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:41:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:41:04 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 26 12:41:04 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 26 12:41:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 26 12:41:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257662221' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 12:41:04 np0005596060 friendly_bassi[84206]: 
Jan 26 12:41:04 np0005596060 friendly_bassi[84206]: {"fsid":"d4cd1917-5876-51b6-bc64-65a16199754d","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":130,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1769449258,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-26T17:40:45.961321+0000","services":{}},"progress_events":{}}
Jan 26 12:41:04 np0005596060 systemd[1]: libpod-1a9e2e1e628ba5af116d971b5f2a5a725ed23f5d29ec4ba06d66def3d925ffd2.scope: Deactivated successfully.
Jan 26 12:41:04 np0005596060 podman[84189]: 2026-01-26 17:41:04.805160791 +0000 UTC m=+0.785723904 container died 1a9e2e1e628ba5af116d971b5f2a5a725ed23f5d29ec4ba06d66def3d925ffd2 (image=quay.io/ceph/ceph:v18, name=friendly_bassi, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:41:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7812987f1cb78c5cbe5808139bdcce023419648cc50d8ee5b3b90234014e15d6-merged.mount: Deactivated successfully.
Jan 26 12:41:04 np0005596060 podman[84189]: 2026-01-26 17:41:04.851886962 +0000 UTC m=+0.832450065 container remove 1a9e2e1e628ba5af116d971b5f2a5a725ed23f5d29ec4ba06d66def3d925ffd2 (image=quay.io/ceph/ceph:v18, name=friendly_bassi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:41:04 np0005596060 systemd[1]: libpod-conmon-1a9e2e1e628ba5af116d971b5f2a5a725ed23f5d29ec4ba06d66def3d925ffd2.scope: Deactivated successfully.
Jan 26 12:41:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:41:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:41:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 26 12:41:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 26 12:41:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 26 12:41:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:41:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:41:05 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Jan 26 12:41:05 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Jan 26 12:41:05 np0005596060 podman[84398]: 2026-01-26 17:41:05.325315231 +0000 UTC m=+0.048644050 container create 5b3b8bc99e8b1d9704bbceb2bcf3825c614b0a1af6d513097df5404dae1d884a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lalande, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:41:05 np0005596060 systemd[1]: Started libpod-conmon-5b3b8bc99e8b1d9704bbceb2bcf3825c614b0a1af6d513097df5404dae1d884a.scope.
Jan 26 12:41:05 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:05 np0005596060 podman[84398]: 2026-01-26 17:41:05.298891198 +0000 UTC m=+0.022220067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:41:05 np0005596060 podman[84398]: 2026-01-26 17:41:05.427226037 +0000 UTC m=+0.150554896 container init 5b3b8bc99e8b1d9704bbceb2bcf3825c614b0a1af6d513097df5404dae1d884a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 12:41:05 np0005596060 podman[84398]: 2026-01-26 17:41:05.433676141 +0000 UTC m=+0.157004920 container start 5b3b8bc99e8b1d9704bbceb2bcf3825c614b0a1af6d513097df5404dae1d884a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 12:41:05 np0005596060 condescending_lalande[84413]: 167 167
Jan 26 12:41:05 np0005596060 systemd[1]: libpod-5b3b8bc99e8b1d9704bbceb2bcf3825c614b0a1af6d513097df5404dae1d884a.scope: Deactivated successfully.
Jan 26 12:41:05 np0005596060 podman[84398]: 2026-01-26 17:41:05.516982503 +0000 UTC m=+0.240311322 container attach 5b3b8bc99e8b1d9704bbceb2bcf3825c614b0a1af6d513097df5404dae1d884a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lalande, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 12:41:05 np0005596060 podman[84398]: 2026-01-26 17:41:05.517729962 +0000 UTC m=+0.241058791 container died 5b3b8bc99e8b1d9704bbceb2bcf3825c614b0a1af6d513097df5404dae1d884a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lalande, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 12:41:05 np0005596060 systemd[1]: var-lib-containers-storage-overlay-9f07bbc2c40b10c8de58337a9405b0a0914abe70b2b4e79164b190290dd5f93b-merged.mount: Deactivated successfully.
Jan 26 12:41:05 np0005596060 podman[84398]: 2026-01-26 17:41:05.820120975 +0000 UTC m=+0.543449754 container remove 5b3b8bc99e8b1d9704bbceb2bcf3825c614b0a1af6d513097df5404dae1d884a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 12:41:05 np0005596060 systemd[1]: libpod-conmon-5b3b8bc99e8b1d9704bbceb2bcf3825c614b0a1af6d513097df5404dae1d884a.scope: Deactivated successfully.
Jan 26 12:41:06 np0005596060 podman[84445]: 2026-01-26 17:41:06.110584383 +0000 UTC m=+0.057023963 container create bbc1735be537022e9b0df925ed0ef9ebedbdbaebff5f020be8a8bd2616358a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:41:06 np0005596060 ceph-mon[74267]: Deploying daemon osd.1 on compute-0
Jan 26 12:41:06 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 26 12:41:06 np0005596060 ceph-mon[74267]: Deploying daemon osd.0 on compute-1
Jan 26 12:41:06 np0005596060 systemd[1]: Started libpod-conmon-bbc1735be537022e9b0df925ed0ef9ebedbdbaebff5f020be8a8bd2616358a57.scope.
Jan 26 12:41:06 np0005596060 podman[84445]: 2026-01-26 17:41:06.083401541 +0000 UTC m=+0.029841191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:41:06 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:06 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740f629bbf88bcbc6493ba7a56a607f267c42913a8828fbde97a2f1c677ee8c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:06 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740f629bbf88bcbc6493ba7a56a607f267c42913a8828fbde97a2f1c677ee8c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:06 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740f629bbf88bcbc6493ba7a56a607f267c42913a8828fbde97a2f1c677ee8c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:06 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740f629bbf88bcbc6493ba7a56a607f267c42913a8828fbde97a2f1c677ee8c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:06 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740f629bbf88bcbc6493ba7a56a607f267c42913a8828fbde97a2f1c677ee8c5/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:06 np0005596060 podman[84445]: 2026-01-26 17:41:06.19840167 +0000 UTC m=+0.144841250 container init bbc1735be537022e9b0df925ed0ef9ebedbdbaebff5f020be8a8bd2616358a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate-test, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 12:41:06 np0005596060 podman[84445]: 2026-01-26 17:41:06.2054656 +0000 UTC m=+0.151905180 container start bbc1735be537022e9b0df925ed0ef9ebedbdbaebff5f020be8a8bd2616358a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate-test, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:41:06 np0005596060 podman[84445]: 2026-01-26 17:41:06.209620906 +0000 UTC m=+0.156060506 container attach bbc1735be537022e9b0df925ed0ef9ebedbdbaebff5f020be8a8bd2616358a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate-test, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:41:06 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate-test[84461]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 26 12:41:06 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate-test[84461]:                            [--no-systemd] [--no-tmpfs]
Jan 26 12:41:06 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate-test[84461]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 26 12:41:06 np0005596060 systemd[1]: libpod-bbc1735be537022e9b0df925ed0ef9ebedbdbaebff5f020be8a8bd2616358a57.scope: Deactivated successfully.
Jan 26 12:41:06 np0005596060 podman[84445]: 2026-01-26 17:41:06.878588016 +0000 UTC m=+0.825027576 container died bbc1735be537022e9b0df925ed0ef9ebedbdbaebff5f020be8a8bd2616358a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:41:06 np0005596060 systemd[1]: var-lib-containers-storage-overlay-740f629bbf88bcbc6493ba7a56a607f267c42913a8828fbde97a2f1c677ee8c5-merged.mount: Deactivated successfully.
Jan 26 12:41:06 np0005596060 podman[84445]: 2026-01-26 17:41:06.941819586 +0000 UTC m=+0.888259146 container remove bbc1735be537022e9b0df925ed0ef9ebedbdbaebff5f020be8a8bd2616358a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate-test, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 12:41:06 np0005596060 systemd[1]: libpod-conmon-bbc1735be537022e9b0df925ed0ef9ebedbdbaebff5f020be8a8bd2616358a57.scope: Deactivated successfully.
Jan 26 12:41:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:41:07 np0005596060 systemd[1]: Reloading.
Jan 26 12:41:07 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:41:07 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:41:07 np0005596060 systemd[1]: Reloading.
Jan 26 12:41:07 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:41:07 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:41:07 np0005596060 systemd[1]: Starting Ceph osd.1 for d4cd1917-5876-51b6-bc64-65a16199754d...
Jan 26 12:41:08 np0005596060 podman[84620]: 2026-01-26 17:41:08.030594449 +0000 UTC m=+0.052130309 container create 3d58aa547a3bb1401c4ab54f0e0838e8404e91af9a822a5bac61955f15b72ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:41:08 np0005596060 podman[84620]: 2026-01-26 17:41:08.005082549 +0000 UTC m=+0.026618399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:41:08 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1ec59f9811067eb128e56b1829250b4ebf5bdfcaf584e4eb916c754c3da54e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1ec59f9811067eb128e56b1829250b4ebf5bdfcaf584e4eb916c754c3da54e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1ec59f9811067eb128e56b1829250b4ebf5bdfcaf584e4eb916c754c3da54e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1ec59f9811067eb128e56b1829250b4ebf5bdfcaf584e4eb916c754c3da54e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc1ec59f9811067eb128e56b1829250b4ebf5bdfcaf584e4eb916c754c3da54e/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:08 np0005596060 podman[84620]: 2026-01-26 17:41:08.167227849 +0000 UTC m=+0.188763689 container init 3d58aa547a3bb1401c4ab54f0e0838e8404e91af9a822a5bac61955f15b72ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 12:41:08 np0005596060 podman[84620]: 2026-01-26 17:41:08.17548627 +0000 UTC m=+0.197022090 container start 3d58aa547a3bb1401c4ab54f0e0838e8404e91af9a822a5bac61955f15b72ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:41:08 np0005596060 podman[84620]: 2026-01-26 17:41:08.18845593 +0000 UTC m=+0.209991800 container attach 3d58aa547a3bb1401c4ab54f0e0838e8404e91af9a822a5bac61955f15b72ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 12:41:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:41:09 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate[84636]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 26 12:41:09 np0005596060 bash[84620]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 26 12:41:09 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate[84636]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 26 12:41:09 np0005596060 bash[84620]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 26 12:41:09 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate[84636]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 26 12:41:09 np0005596060 bash[84620]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 26 12:41:09 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate[84636]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 26 12:41:09 np0005596060 bash[84620]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 26 12:41:09 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate[84636]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 26 12:41:09 np0005596060 bash[84620]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Jan 26 12:41:09 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate[84636]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 26 12:41:09 np0005596060 bash[84620]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 26 12:41:09 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate[84636]: --> ceph-volume raw activate successful for osd ID: 1
Jan 26 12:41:09 np0005596060 bash[84620]: --> ceph-volume raw activate successful for osd ID: 1
Jan 26 12:41:09 np0005596060 systemd[1]: libpod-3d58aa547a3bb1401c4ab54f0e0838e8404e91af9a822a5bac61955f15b72ea6.scope: Deactivated successfully.
Jan 26 12:41:09 np0005596060 systemd[1]: libpod-3d58aa547a3bb1401c4ab54f0e0838e8404e91af9a822a5bac61955f15b72ea6.scope: Consumed 1.032s CPU time.
Jan 26 12:41:09 np0005596060 podman[84756]: 2026-01-26 17:41:09.250854491 +0000 UTC m=+0.029301027 container died 3d58aa547a3bb1401c4ab54f0e0838e8404e91af9a822a5bac61955f15b72ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 12:41:09 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fc1ec59f9811067eb128e56b1829250b4ebf5bdfcaf584e4eb916c754c3da54e-merged.mount: Deactivated successfully.
Jan 26 12:41:09 np0005596060 podman[84756]: 2026-01-26 17:41:09.555030129 +0000 UTC m=+0.333476625 container remove 3d58aa547a3bb1401c4ab54f0e0838e8404e91af9a822a5bac61955f15b72ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1-activate, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 12:41:09 np0005596060 podman[84814]: 2026-01-26 17:41:09.786806903 +0000 UTC m=+0.057793803 container create bcd2c4d3bb649631f63c84aad376a34ef34318eedbf613f57bae85e668918de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:41:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5d64e07d30b64f5af6e2989d101449425e34599f89d5f934c82414c4bc6a08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5d64e07d30b64f5af6e2989d101449425e34599f89d5f934c82414c4bc6a08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5d64e07d30b64f5af6e2989d101449425e34599f89d5f934c82414c4bc6a08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5d64e07d30b64f5af6e2989d101449425e34599f89d5f934c82414c4bc6a08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f5d64e07d30b64f5af6e2989d101449425e34599f89d5f934c82414c4bc6a08/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:09 np0005596060 podman[84814]: 2026-01-26 17:41:09.751596546 +0000 UTC m=+0.022583466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:41:09 np0005596060 podman[84814]: 2026-01-26 17:41:09.850627019 +0000 UTC m=+0.121613949 container init bcd2c4d3bb649631f63c84aad376a34ef34318eedbf613f57bae85e668918de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 12:41:09 np0005596060 podman[84814]: 2026-01-26 17:41:09.858416627 +0000 UTC m=+0.129403547 container start bcd2c4d3bb649631f63c84aad376a34ef34318eedbf613f57bae85e668918de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:41:09 np0005596060 bash[84814]: bcd2c4d3bb649631f63c84aad376a34ef34318eedbf613f57bae85e668918de8
Jan 26 12:41:09 np0005596060 systemd[1]: Started Ceph osd.1 for d4cd1917-5876-51b6-bc64-65a16199754d.
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: set uid:gid to 167:167 (ceph:ceph)
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: pidfile_write: ignore empty --pid-file
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: bdev(0x556c732b9800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: bdev(0x556c732b9800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: bdev(0x556c732b9800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: bdev(0x556c732b9800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: bdev(0x556c740fb800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: bdev(0x556c740fb800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: bdev(0x556c740fb800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: bdev(0x556c740fb800 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 26 12:41:09 np0005596060 ceph-osd[84834]: bdev(0x556c740fb800 /var/lib/ceph/osd/ceph-1/block) close
Jan 26 12:41:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:41:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:41:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c732b9800 /var/lib/ceph/osd/ceph-1/block) close
Jan 26 12:41:10 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:10 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: load: jerasure load: lrc 
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 26 12:41:10 np0005596060 podman[84993]: 2026-01-26 17:41:10.541705632 +0000 UTC m=+0.040499413 container create 37e001591692b28cb8abc458b2d0d76de30b6fc7d0bd3017604e01fc8ae6b933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:41:10 np0005596060 systemd[1]: Started libpod-conmon-37e001591692b28cb8abc458b2d0d76de30b6fc7d0bd3017604e01fc8ae6b933.scope.
Jan 26 12:41:10 np0005596060 podman[84993]: 2026-01-26 17:41:10.524492914 +0000 UTC m=+0.023286705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:41:10 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:10 np0005596060 podman[84993]: 2026-01-26 17:41:10.648546603 +0000 UTC m=+0.147340404 container init 37e001591692b28cb8abc458b2d0d76de30b6fc7d0bd3017604e01fc8ae6b933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hertz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 26 12:41:10 np0005596060 podman[84993]: 2026-01-26 17:41:10.656380243 +0000 UTC m=+0.155174024 container start 37e001591692b28cb8abc458b2d0d76de30b6fc7d0bd3017604e01fc8ae6b933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hertz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 12:41:10 np0005596060 podman[84993]: 2026-01-26 17:41:10.659876882 +0000 UTC m=+0.158670693 container attach 37e001591692b28cb8abc458b2d0d76de30b6fc7d0bd3017604e01fc8ae6b933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hertz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 12:41:10 np0005596060 hungry_hertz[85010]: 167 167
Jan 26 12:41:10 np0005596060 systemd[1]: libpod-37e001591692b28cb8abc458b2d0d76de30b6fc7d0bd3017604e01fc8ae6b933.scope: Deactivated successfully.
Jan 26 12:41:10 np0005596060 podman[84993]: 2026-01-26 17:41:10.662246772 +0000 UTC m=+0.161040573 container died 37e001591692b28cb8abc458b2d0d76de30b6fc7d0bd3017604e01fc8ae6b933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:41:10 np0005596060 systemd[1]: var-lib-containers-storage-overlay-9cb86089c385c150fb035aa4cb652b6a88d2bee98b3c8650828882cc52ed7894-merged.mount: Deactivated successfully.
Jan 26 12:41:10 np0005596060 podman[84993]: 2026-01-26 17:41:10.701379669 +0000 UTC m=+0.200173450 container remove 37e001591692b28cb8abc458b2d0d76de30b6fc7d0bd3017604e01fc8ae6b933 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:41:10 np0005596060 systemd[1]: libpod-conmon-37e001591692b28cb8abc458b2d0d76de30b6fc7d0bd3017604e01fc8ae6b933.scope: Deactivated successfully.
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) close
Jan 26 12:41:10 np0005596060 podman[85038]: 2026-01-26 17:41:10.861470737 +0000 UTC m=+0.042509304 container create 5a6a423c7ef5f58b7447ff6061226b9da4d7b6b89168fbc1d79dab986034701b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 12:41:10 np0005596060 systemd[1]: Started libpod-conmon-5a6a423c7ef5f58b7447ff6061226b9da4d7b6b89168fbc1d79dab986034701b.scope.
Jan 26 12:41:10 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fe17141bda0bf445a31a2cde3ac299c21adb10a2979ae498628cfd532a6377/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fe17141bda0bf445a31a2cde3ac299c21adb10a2979ae498628cfd532a6377/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fe17141bda0bf445a31a2cde3ac299c21adb10a2979ae498628cfd532a6377/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2fe17141bda0bf445a31a2cde3ac299c21adb10a2979ae498628cfd532a6377/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:10 np0005596060 podman[85038]: 2026-01-26 17:41:10.844623398 +0000 UTC m=+0.025661985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:41:10 np0005596060 podman[85038]: 2026-01-26 17:41:10.94836304 +0000 UTC m=+0.129401627 container init 5a6a423c7ef5f58b7447ff6061226b9da4d7b6b89168fbc1d79dab986034701b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 12:41:10 np0005596060 podman[85038]: 2026-01-26 17:41:10.954339213 +0000 UTC m=+0.135377780 container start 5a6a423c7ef5f58b7447ff6061226b9da4d7b6b89168fbc1d79dab986034701b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:41:10 np0005596060 podman[85038]: 2026-01-26 17:41:10.957625276 +0000 UTC m=+0.138663843 container attach 5a6a423c7ef5f58b7447ff6061226b9da4d7b6b89168fbc1d79dab986034701b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417cc00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417d400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bdev(0x556c7417d400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bluefs mount
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bluefs mount shared_bdev_used = 0
Jan 26 12:41:10 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: RocksDB version: 7.9.2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Git sha 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: DB SUMMARY
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: DB Session ID:  PYWCZXUY0JBXOTLRONK7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: CURRENT file:  CURRENT
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: IDENTITY file:  IDENTITY
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                         Options.error_if_exists: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.create_if_missing: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                         Options.paranoid_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                                     Options.env: 0x556c7414dd50
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                                Options.info_log: 0x556c73341ca0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_file_opening_threads: 16
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                              Options.statistics: (nil)
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.use_fsync: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.max_log_file_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                         Options.allow_fallocate: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.use_direct_reads: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.create_missing_column_families: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                              Options.db_log_dir: 
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                                 Options.wal_dir: db.wal
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.advise_random_on_open: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.write_buffer_manager: 0x556c73374460
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                            Options.rate_limiter: (nil)
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.unordered_write: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.row_cache: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                              Options.wal_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.allow_ingest_behind: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.two_write_queues: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.manual_wal_flush: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.wal_compression: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.atomic_flush: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.log_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.allow_data_in_errors: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.db_host_id: __hostname__
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.max_background_jobs: 4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.max_background_compactions: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.max_subcompactions: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.max_open_files: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.bytes_per_sync: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.max_background_flushes: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Compression algorithms supported:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kZSTD supported: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kXpressCompression supported: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kBZip2Compression supported: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kLZ4Compression supported: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kZlibCompression supported: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kSnappyCompression supported: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c73344380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c73344380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c73344380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c73344380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c73344380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c73344380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c73344380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332cdd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c73344300)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332c2d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c73344300)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332c2d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c73344300)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332c2d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 041f99d3-8545-489e-83c7-b8077a765278
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449271021406, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449271021624, "job": 1, "event": "recovery_finished"}
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: freelist init
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: freelist _read_cfg
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluefs umount
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bdev(0x556c7417d400 /var/lib/ceph/osd/ceph-1/block) close
Jan 26 12:41:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:41:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:41:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:41:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bdev(0x556c7417d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bdev(0x556c7417d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bdev(0x556c7417d400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bdev(0x556c7417d400 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluefs mount
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluefs mount shared_bdev_used = 4718592
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: RocksDB version: 7.9.2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Git sha 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: DB SUMMARY
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: DB Session ID:  PYWCZXUY0JBXOTLRONK6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: CURRENT file:  CURRENT
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: IDENTITY file:  IDENTITY
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                         Options.error_if_exists: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.create_if_missing: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                         Options.paranoid_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                                     Options.env: 0x556c740c63f0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                                Options.info_log: 0x556c73345160
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_file_opening_threads: 16
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                              Options.statistics: (nil)
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.use_fsync: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.max_log_file_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                         Options.allow_fallocate: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.use_direct_reads: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.create_missing_column_families: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                              Options.db_log_dir: 
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                                 Options.wal_dir: db.wal
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.advise_random_on_open: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.write_buffer_manager: 0x556c73374460
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                            Options.rate_limiter: (nil)
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.unordered_write: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.row_cache: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                              Options.wal_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.allow_ingest_behind: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.two_write_queues: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.manual_wal_flush: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.wal_compression: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.atomic_flush: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.log_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.allow_data_in_errors: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.db_host_id: __hostname__
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.max_background_jobs: 4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.max_background_compactions: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.max_subcompactions: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.max_open_files: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.bytes_per_sync: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.max_background_flushes: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Compression algorithms supported:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kZSTD supported: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kXpressCompression supported: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kBZip2Compression supported: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kLZ4Compression supported: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kZlibCompression supported: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: 	kSnappyCompression supported: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c733c35c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332c2d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c733c35c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332c2d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c733c35c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332c2d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c733c35c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332c2d0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c733c35c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556c7332c2d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c733c35c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556c7332c2d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c733c35c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556c7332c2d0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c733c3520)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332d350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c733c3520)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556c7332d350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:           Options.merge_operator: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.compaction_filter_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.sst_partitioner_factory: None
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556c733c3520)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556c7332d350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.write_buffer_size: 16777216
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.max_write_buffer_number: 64
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.compression: LZ4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.num_levels: 7
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.level: 32767
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.compression_opts.strategy: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                  Options.compression_opts.enabled: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.arena_block_size: 1048576
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.disable_auto_compactions: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.inplace_update_support: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.bloom_locality: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                    Options.max_successive_merges: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.paranoid_file_checks: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.force_consistency_checks: 1
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.report_bg_io_stats: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                               Options.ttl: 2592000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                       Options.enable_blob_files: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                           Options.min_blob_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                          Options.blob_file_size: 268435456
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb:                Options.blob_file_starting_level: 0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 041f99d3-8545-489e-83c7-b8077a765278
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449271283123, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449271286993, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449271, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "041f99d3-8545-489e-83c7-b8077a765278", "db_session_id": "PYWCZXUY0JBXOTLRONK6", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449271289411, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449271, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "041f99d3-8545-489e-83c7-b8077a765278", "db_session_id": "PYWCZXUY0JBXOTLRONK6", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449271291464, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449271, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "041f99d3-8545-489e-83c7-b8077a765278", "db_session_id": "PYWCZXUY0JBXOTLRONK6", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449271292702, "job": 1, "event": "recovery_finished"}
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556c733ffc00
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: DB pointer 0x556c73361a00
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556c7332c2d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556c7332c2d0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: _get_class not permitted to load lua
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: _get_class not permitted to load sdk
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: _get_class not permitted to load test_remote_reads
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: osd.1 0 load_pgs
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: osd.1 0 load_pgs opened 0 pgs
Jan 26 12:41:11 np0005596060 ceph-osd[84834]: osd.1 0 log_to_monitors true
Jan 26 12:41:11 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1[84830]: 2026-01-26T17:41:11.317+0000 7ff36e737740 -1 osd.1 0 log_to_monitors true
Jan 26 12:41:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Jan 26 12:41:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1572696625,v1:192.168.122.100:6803/1572696625]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 26 12:41:11 np0005596060 musing_swirles[85055]: {
Jan 26 12:41:11 np0005596060 musing_swirles[85055]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:41:11 np0005596060 musing_swirles[85055]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:41:11 np0005596060 musing_swirles[85055]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:41:11 np0005596060 musing_swirles[85055]:        "osd_id": 1,
Jan 26 12:41:11 np0005596060 musing_swirles[85055]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:41:11 np0005596060 musing_swirles[85055]:        "type": "bluestore"
Jan 26 12:41:11 np0005596060 musing_swirles[85055]:    }
Jan 26 12:41:11 np0005596060 musing_swirles[85055]: }
Jan 26 12:41:11 np0005596060 systemd[1]: libpod-5a6a423c7ef5f58b7447ff6061226b9da4d7b6b89168fbc1d79dab986034701b.scope: Deactivated successfully.
Jan 26 12:41:11 np0005596060 podman[85038]: 2026-01-26 17:41:11.93651558 +0000 UTC m=+1.117554167 container died 5a6a423c7ef5f58b7447ff6061226b9da4d7b6b89168fbc1d79dab986034701b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 12:41:11 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a2fe17141bda0bf445a31a2cde3ac299c21adb10a2979ae498628cfd532a6377-merged.mount: Deactivated successfully.
Jan 26 12:41:11 np0005596060 podman[85038]: 2026-01-26 17:41:11.997330799 +0000 UTC m=+1.178369366 container remove 5a6a423c7ef5f58b7447ff6061226b9da4d7b6b89168fbc1d79dab986034701b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swirles, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 12:41:12 np0005596060 systemd[1]: libpod-conmon-5a6a423c7ef5f58b7447ff6061226b9da4d7b6b89168fbc1d79dab986034701b.scope: Deactivated successfully.
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: from='osd.1 [v2:192.168.122.100:6802/1572696625,v1:192.168.122.100:6803/1572696625]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1572696625,v1:192.168.122.100:6803/1572696625]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1572696625,v1:192.168.122.100:6803/1572696625]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-0,root=default}
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:12 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 12:41:12 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:41:12 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 26 12:41:12 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Jan 26 12:41:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/2417575874,v1:192.168.122.101:6801/2417575874]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 26 12:41:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/1572696625,v1:192.168.122.100:6803/1572696625]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/2417575874,v1:192.168.122.101:6801/2417575874]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 26 12:41:13 np0005596060 ceph-osd[84834]: osd.1 0 done with init, starting boot process
Jan 26 12:41:13 np0005596060 ceph-osd[84834]: osd.1 0 start_boot
Jan 26 12:41:13 np0005596060 ceph-osd[84834]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 26 12:41:13 np0005596060 ceph-osd[84834]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 26 12:41:13 np0005596060 ceph-osd[84834]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 26 12:41:13 np0005596060 ceph-osd[84834]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 26 12:41:13 np0005596060 ceph-osd[84834]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/2417575874,v1:192.168.122.101:6801/2417575874]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-1,root=default}
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:13 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 12:41:13 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:41:13 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1572696625; not ready for session (expect reconnect)
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:13 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: from='osd.1 [v2:192.168.122.100:6802/1572696625,v1:192.168.122.100:6803/1572696625]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: from='osd.1 [v2:192.168.122.100:6802/1572696625,v1:192.168.122.100:6803/1572696625]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: from='osd.0 [v2:192.168.122.101:6800/2417575874,v1:192.168.122.101:6801/2417575874]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 26 12:41:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:41:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:41:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:41:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:41:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:41:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/2417575874,v1:192.168.122.101:6801/2417575874]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 26 12:41:14 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1572696625; not ready for session (expect reconnect)
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:41:14 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:41:14 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 12:41:14 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2417575874; not ready for session (expect reconnect)
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:41:14 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: from='osd.1 [v2:192.168.122.100:6802/1572696625,v1:192.168.122.100:6803/1572696625]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: from='osd.0 [v2:192.168.122.101:6800/2417575874,v1:192.168.122.101:6801/2417575874]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: from='osd.0 [v2:192.168.122.101:6800/2417575874,v1:192.168.122.101:6801/2417575874]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: from='osd.0 [v2:192.168.122.101:6800/2417575874,v1:192.168.122.101:6801/2417575874]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:41:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:14 np0005596060 podman[85717]: 2026-01-26 17:41:14.536085707 +0000 UTC m=+0.141143516 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 12:41:14 np0005596060 podman[85717]: 2026-01-26 17:41:14.659581172 +0000 UTC m=+0.264638941 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:41:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:15 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1572696625; not ready for session (expect reconnect)
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:41:15 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:15 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2417575874; not ready for session (expect reconnect)
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:41:15 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:41:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:16 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1572696625; not ready for session (expect reconnect)
Jan 26 12:41:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:16 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:41:16 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2417575874; not ready for session (expect reconnect)
Jan 26 12:41:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:41:16 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 12:41:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:41:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:41:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:41:17 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1572696625; not ready for session (expect reconnect)
Jan 26 12:41:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:17 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:41:17 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2417575874; not ready for session (expect reconnect)
Jan 26 12:41:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:41:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:41:17 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 12:41:17 np0005596060 podman[86069]: 2026-01-26 17:41:17.654914668 +0000 UTC m=+0.116113418 container create 5aaf3596b21c8ae835cd9f5bdd17676e3f802192ed5e48430d47f9898559c241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:41:17 np0005596060 podman[86069]: 2026-01-26 17:41:17.561454228 +0000 UTC m=+0.022652988 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:41:17 np0005596060 systemd[1]: Started libpod-conmon-5aaf3596b21c8ae835cd9f5bdd17676e3f802192ed5e48430d47f9898559c241.scope.
Jan 26 12:41:17 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:17 np0005596060 podman[86069]: 2026-01-26 17:41:17.92783537 +0000 UTC m=+0.389034140 container init 5aaf3596b21c8ae835cd9f5bdd17676e3f802192ed5e48430d47f9898559c241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 12:41:17 np0005596060 podman[86069]: 2026-01-26 17:41:17.938777479 +0000 UTC m=+0.399976219 container start 5aaf3596b21c8ae835cd9f5bdd17676e3f802192ed5e48430d47f9898559c241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:41:17 np0005596060 hardcore_rubin[86086]: 167 167
Jan 26 12:41:17 np0005596060 systemd[1]: libpod-5aaf3596b21c8ae835cd9f5bdd17676e3f802192ed5e48430d47f9898559c241.scope: Deactivated successfully.
Jan 26 12:41:18 np0005596060 podman[86069]: 2026-01-26 17:41:18.07307898 +0000 UTC m=+0.534277760 container attach 5aaf3596b21c8ae835cd9f5bdd17676e3f802192ed5e48430d47f9898559c241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:41:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:18 np0005596060 podman[86069]: 2026-01-26 17:41:18.074619819 +0000 UTC m=+0.535818589 container died 5aaf3596b21c8ae835cd9f5bdd17676e3f802192ed5e48430d47f9898559c241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 12:41:18 np0005596060 systemd[1]: var-lib-containers-storage-overlay-81fd6898eaef8d1f324cdc6940a2dd7ce75e66c27496f409f40c508c53c56a03-merged.mount: Deactivated successfully.
Jan 26 12:41:18 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1572696625; not ready for session (expect reconnect)
Jan 26 12:41:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:18 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:41:18 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2417575874; not ready for session (expect reconnect)
Jan 26 12:41:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:41:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:41:18 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 12:41:18 np0005596060 podman[86069]: 2026-01-26 17:41:18.416417954 +0000 UTC m=+0.877616724 container remove 5aaf3596b21c8ae835cd9f5bdd17676e3f802192ed5e48430d47f9898559c241 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rubin, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 12:41:18 np0005596060 systemd[1]: libpod-conmon-5aaf3596b21c8ae835cd9f5bdd17676e3f802192ed5e48430d47f9898559c241.scope: Deactivated successfully.
Jan 26 12:41:18 np0005596060 podman[86111]: 2026-01-26 17:41:18.604905635 +0000 UTC m=+0.079071335 container create 4876fc8c2a2b322d178ffae30684f0df67385dedfdcfab03d21c124e6b6aca26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 26 12:41:18 np0005596060 podman[86111]: 2026-01-26 17:41:18.547132794 +0000 UTC m=+0.021298474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:41:18 np0005596060 systemd[1]: Started libpod-conmon-4876fc8c2a2b322d178ffae30684f0df67385dedfdcfab03d21c124e6b6aca26.scope.
Jan 26 12:41:18 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe04ea9265f6ee94773cc2bcad3fd0c92f8a70f54513924b2e0bbe43fddec96e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe04ea9265f6ee94773cc2bcad3fd0c92f8a70f54513924b2e0bbe43fddec96e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe04ea9265f6ee94773cc2bcad3fd0c92f8a70f54513924b2e0bbe43fddec96e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe04ea9265f6ee94773cc2bcad3fd0c92f8a70f54513924b2e0bbe43fddec96e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:18 np0005596060 podman[86111]: 2026-01-26 17:41:18.744114621 +0000 UTC m=+0.218280291 container init 4876fc8c2a2b322d178ffae30684f0df67385dedfdcfab03d21c124e6b6aca26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:41:18 np0005596060 podman[86111]: 2026-01-26 17:41:18.758654672 +0000 UTC m=+0.232820342 container start 4876fc8c2a2b322d178ffae30684f0df67385dedfdcfab03d21c124e6b6aca26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:41:18 np0005596060 podman[86111]: 2026-01-26 17:41:18.784917561 +0000 UTC m=+0.259083221 container attach 4876fc8c2a2b322d178ffae30684f0df67385dedfdcfab03d21c124e6b6aca26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 12:41:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:41:19 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1572696625; not ready for session (expect reconnect)
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:19 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:41:19 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2417575874; not ready for session (expect reconnect)
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:41:19 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 26 12:41:19 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 26 12:41:19 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 26 12:41:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]: [
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:    {
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:        "available": false,
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:        "ceph_device": false,
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:        "lsm_data": {},
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:        "lvs": [],
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:        "path": "/dev/sr0",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:        "rejected_reasons": [
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "Has a FileSystem",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "Insufficient space (<5GB)"
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:        ],
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:        "sys_api": {
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "actuators": null,
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "device_nodes": "sr0",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "devname": "sr0",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "human_readable_size": "482.00 KB",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "id_bus": "ata",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "model": "QEMU DVD-ROM",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "nr_requests": "2",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "parent": "/dev/sr0",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "partitions": {},
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "path": "/dev/sr0",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "removable": "1",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "rev": "2.5+",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "ro": "0",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "rotational": "1",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "sas_address": "",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "sas_device_handle": "",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "scheduler_mode": "mq-deadline",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "sectors": 0,
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "sectorsize": "2048",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "size": 493568.0,
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "support_discard": "2048",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "type": "disk",
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:            "vendor": "QEMU"
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:        }
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]:    }
Jan 26 12:41:20 np0005596060 vigilant_albattani[86127]: ]
Jan 26 12:41:20 np0005596060 systemd[1]: libpod-4876fc8c2a2b322d178ffae30684f0df67385dedfdcfab03d21c124e6b6aca26.scope: Deactivated successfully.
Jan 26 12:41:20 np0005596060 systemd[1]: libpod-4876fc8c2a2b322d178ffae30684f0df67385dedfdcfab03d21c124e6b6aca26.scope: Consumed 1.286s CPU time.
Jan 26 12:41:20 np0005596060 podman[86111]: 2026-01-26 17:41:20.042307769 +0000 UTC m=+1.516473439 container died 4876fc8c2a2b322d178ffae30684f0df67385dedfdcfab03d21c124e6b6aca26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 12:41:20 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fe04ea9265f6ee94773cc2bcad3fd0c92f8a70f54513924b2e0bbe43fddec96e-merged.mount: Deactivated successfully.
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:41:20 np0005596060 podman[86111]: 2026-01-26 17:41:20.183101125 +0000 UTC m=+1.657266815 container remove 4876fc8c2a2b322d178ffae30684f0df67385dedfdcfab03d21c124e6b6aca26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:41:20 np0005596060 systemd[1]: libpod-conmon-4876fc8c2a2b322d178ffae30684f0df67385dedfdcfab03d21c124e6b6aca26.scope: Deactivated successfully.
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:20 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1572696625; not ready for session (expect reconnect)
Jan 26 12:41:20 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/2417575874; not ready for session (expect reconnect)
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:41:20 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:41:20 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 26 12:41:20 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 26 12:41:20 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 26 12:41:20 np0005596060 ceph-mgr[74563]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 26 12:41:20 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/2417575874,v1:192.168.122.101:6801/2417575874] boot
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:20 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:41:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 26 12:41:21 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/1572696625; not ready for session (expect reconnect)
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:21 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: OSD bench result of 3533.538905 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: osd.0 [v2:192.168.122.101:6800/2417575874,v1:192.168.122.101:6801/2417575874] boot
Jan 26 12:41:21 np0005596060 ceph-osd[84834]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 16.736 iops: 4284.446 elapsed_sec: 0.700
Jan 26 12:41:21 np0005596060 ceph-osd[84834]: log_channel(cluster) log [WRN] : OSD bench result of 4284.446175 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 26 12:41:21 np0005596060 ceph-osd[84834]: osd.1 0 waiting for initial osdmap
Jan 26 12:41:21 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1[84830]: 2026-01-26T17:41:21.367+0000 7ff36a6b7640 -1 osd.1 0 waiting for initial osdmap
Jan 26 12:41:21 np0005596060 ceph-osd[84834]: osd.1 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 26 12:41:21 np0005596060 ceph-osd[84834]: osd.1 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 26 12:41:21 np0005596060 ceph-osd[84834]: osd.1 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 26 12:41:21 np0005596060 ceph-osd[84834]: osd.1 9 check_osdmap_features require_osd_release unknown -> reef
Jan 26 12:41:21 np0005596060 ceph-osd[84834]: osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 26 12:41:21 np0005596060 ceph-osd[84834]: osd.1 9 set_numa_affinity not setting numa affinity
Jan 26 12:41:21 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-osd-1[84830]: 2026-01-26T17:41:21.390+0000 7ff365cdf640 -1 osd.1 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 26 12:41:21 np0005596060 ceph-osd[84834]: osd.1 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/1572696625,v1:192.168.122.100:6803/1572696625] boot
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 26 12:41:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 26 12:41:21 np0005596060 ceph-osd[84834]: osd.1 10 state: booting -> active
Jan 26 12:41:22 np0005596060 ceph-mgr[74563]: [devicehealth INFO root] creating mgr pool
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: OSD bench result of 4284.446175 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: osd.1 [v2:192.168.122.100:6802/1572696625,v1:192.168.122.100:6803/1572696625] boot
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Jan 26 12:41:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 26 12:41:22 np0005596060 ceph-osd[84834]: osd.1 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 26 12:41:22 np0005596060 ceph-osd[84834]: osd.1 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 26 12:41:22 np0005596060 ceph-osd[84834]: osd.1 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 26 12:41:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 unknown; 0 B data, 852 MiB used, 13 GiB / 14 GiB avail
Jan 26 12:41:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 26 12:41:23 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 12:41:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 26 12:41:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Jan 26 12:41:23 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 26 12:41:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 26 12:41:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 26 12:41:23 np0005596060 ceph-mgr[74563]: [devicehealth INFO root] creating main.db for devicehealth
Jan 26 12:41:23 np0005596060 ceph-mgr[74563]: [devicehealth INFO root] Check health
Jan 26 12:41:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 26 12:41:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 26 12:41:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 26 12:41:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 12:41:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 26 12:41:24 np0005596060 ceph-mon[74267]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 12:41:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 26 12:41:24 np0005596060 ceph-mon[74267]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 26 12:41:24 np0005596060 ceph-mon[74267]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 26 12:41:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 26 12:41:24 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 26 12:41:24 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mbryrf(active, since 100s)
Jan 26 12:41:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 unknown; 0 B data, 852 MiB used, 13 GiB / 14 GiB avail
Jan 26 12:41:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:41:25 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 26 12:41:26 np0005596060 ceph-mon[74267]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 26 12:41:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 26 12:41:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 26 12:41:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:41:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 26 12:41:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 26 12:41:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 26 12:41:35 np0005596060 python3[87348]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:41:35 np0005596060 podman[87350]: 2026-01-26 17:41:35.198320461 +0000 UTC m=+0.022222860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:41:35 np0005596060 podman[87350]: 2026-01-26 17:41:35.351563828 +0000 UTC m=+0.175466207 container create f92803ddbe4b74fc6904f562b354340207be2985a56c94493bad31ac32defaee (image=quay.io/ceph/ceph:v18, name=jovial_mayer, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 26 12:41:35 np0005596060 systemd[1]: Started libpod-conmon-f92803ddbe4b74fc6904f562b354340207be2985a56c94493bad31ac32defaee.scope.
Jan 26 12:41:35 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16596e378838fa34be3121960c1039b46b2b622e9bb10b98743a1f88e7195f2f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16596e378838fa34be3121960c1039b46b2b622e9bb10b98743a1f88e7195f2f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16596e378838fa34be3121960c1039b46b2b622e9bb10b98743a1f88e7195f2f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:35 np0005596060 podman[87350]: 2026-01-26 17:41:35.50578665 +0000 UTC m=+0.329689049 container init f92803ddbe4b74fc6904f562b354340207be2985a56c94493bad31ac32defaee (image=quay.io/ceph/ceph:v18, name=jovial_mayer, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:41:35 np0005596060 podman[87350]: 2026-01-26 17:41:35.512429364 +0000 UTC m=+0.336331743 container start f92803ddbe4b74fc6904f562b354340207be2985a56c94493bad31ac32defaee (image=quay.io/ceph/ceph:v18, name=jovial_mayer, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:41:35 np0005596060 podman[87350]: 2026-01-26 17:41:35.516060944 +0000 UTC m=+0.339963333 container attach f92803ddbe4b74fc6904f562b354340207be2985a56c94493bad31ac32defaee (image=quay.io/ceph/ceph:v18, name=jovial_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:41:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:41:35 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 26 12:41:35 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 26 12:41:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 26 12:41:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/149032808' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 12:41:36 np0005596060 jovial_mayer[87366]: 
Jan 26 12:41:36 np0005596060 jovial_mayer[87366]: {"fsid":"d4cd1917-5876-51b6-bc64-65a16199754d","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":161,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":13,"num_osds":2,"num_up_osds":2,"osd_up_since":1769449281,"num_in_osds":2,"osd_in_since":1769449258,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":894656512,"bytes_avail":14129340416,"bytes_total":15023996928},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-26T17:40:45.961321+0000","services":{}},"progress_events":{}}
Jan 26 12:41:36 np0005596060 systemd[1]: libpod-f92803ddbe4b74fc6904f562b354340207be2985a56c94493bad31ac32defaee.scope: Deactivated successfully.
Jan 26 12:41:36 np0005596060 podman[87350]: 2026-01-26 17:41:36.219051148 +0000 UTC m=+1.042953547 container died f92803ddbe4b74fc6904f562b354340207be2985a56c94493bad31ac32defaee (image=quay.io/ceph/ceph:v18, name=jovial_mayer, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:41:36 np0005596060 systemd[1]: var-lib-containers-storage-overlay-16596e378838fa34be3121960c1039b46b2b622e9bb10b98743a1f88e7195f2f-merged.mount: Deactivated successfully.
Jan 26 12:41:36 np0005596060 podman[87350]: 2026-01-26 17:41:36.26444984 +0000 UTC m=+1.088352219 container remove f92803ddbe4b74fc6904f562b354340207be2985a56c94493bad31ac32defaee (image=quay.io/ceph/ceph:v18, name=jovial_mayer, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:41:36 np0005596060 systemd[1]: libpod-conmon-f92803ddbe4b74fc6904f562b354340207be2985a56c94493bad31ac32defaee.scope: Deactivated successfully.
Jan 26 12:41:36 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:36 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:36 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:36 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:36 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 12:41:36 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:41:36 np0005596060 python3[87428]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:41:36 np0005596060 podman[87429]: 2026-01-26 17:41:36.796371816 +0000 UTC m=+0.024661641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:41:36 np0005596060 podman[87429]: 2026-01-26 17:41:36.916882305 +0000 UTC m=+0.145172110 container create 841bebde4f705c159d5bd88c53f3631353193f392dd5b9e973bc774e70c9397e (image=quay.io/ceph/ceph:v18, name=upbeat_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 12:41:36 np0005596060 systemd[1]: Started libpod-conmon-841bebde4f705c159d5bd88c53f3631353193f392dd5b9e973bc774e70c9397e.scope.
Jan 26 12:41:37 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce67c3d25897481537c282c58d7f0b33f7c81fc7d16ab4d433822600e1d4250/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bce67c3d25897481537c282c58d7f0b33f7c81fc7d16ab4d433822600e1d4250/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:37 np0005596060 podman[87429]: 2026-01-26 17:41:37.042643653 +0000 UTC m=+0.270933478 container init 841bebde4f705c159d5bd88c53f3631353193f392dd5b9e973bc774e70c9397e (image=quay.io/ceph/ceph:v18, name=upbeat_snyder, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 12:41:37 np0005596060 podman[87429]: 2026-01-26 17:41:37.048768334 +0000 UTC m=+0.277058139 container start 841bebde4f705c159d5bd88c53f3631353193f392dd5b9e973bc774e70c9397e (image=quay.io/ceph/ceph:v18, name=upbeat_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:41:37 np0005596060 podman[87429]: 2026-01-26 17:41:37.052216049 +0000 UTC m=+0.280505854 container attach 841bebde4f705c159d5bd88c53f3631353193f392dd5b9e973bc774e70c9397e (image=quay.io/ceph/ceph:v18, name=upbeat_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:41:37 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:41:37 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:41:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 26 12:41:37 np0005596060 ceph-mon[74267]: Updating compute-2:/etc/ceph/ceph.conf
Jan 26 12:41:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 26 12:41:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3706561662' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:38 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 12:41:38 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 12:41:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 26 12:41:38 np0005596060 ceph-mon[74267]: Updating compute-2:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:41:38 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3706561662' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3706561662' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 12:41:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Jan 26 12:41:38 np0005596060 upbeat_snyder[87444]: pool 'vms' created
Jan 26 12:41:38 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 26 12:41:38 np0005596060 systemd[1]: libpod-841bebde4f705c159d5bd88c53f3631353193f392dd5b9e973bc774e70c9397e.scope: Deactivated successfully.
Jan 26 12:41:38 np0005596060 podman[87429]: 2026-01-26 17:41:38.445437693 +0000 UTC m=+1.673727518 container died 841bebde4f705c159d5bd88c53f3631353193f392dd5b9e973bc774e70c9397e (image=quay.io/ceph/ceph:v18, name=upbeat_snyder, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 12:41:38 np0005596060 systemd[1]: var-lib-containers-storage-overlay-bce67c3d25897481537c282c58d7f0b33f7c81fc7d16ab4d433822600e1d4250-merged.mount: Deactivated successfully.
Jan 26 12:41:38 np0005596060 podman[87429]: 2026-01-26 17:41:38.498457684 +0000 UTC m=+1.726747489 container remove 841bebde4f705c159d5bd88c53f3631353193f392dd5b9e973bc774e70c9397e (image=quay.io/ceph/ceph:v18, name=upbeat_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:41:38 np0005596060 systemd[1]: libpod-conmon-841bebde4f705c159d5bd88c53f3631353193f392dd5b9e973bc774e70c9397e.scope: Deactivated successfully.
Jan 26 12:41:38 np0005596060 python3[87508]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:41:38 np0005596060 podman[87509]: 2026-01-26 17:41:38.86935173 +0000 UTC m=+0.050645632 container create d2ef986c7d54922e89dcce2f21082a738795c8700b2b157e9c9e4fd9b7f1a368 (image=quay.io/ceph/ceph:v18, name=vigorous_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:41:38 np0005596060 systemd[1]: Started libpod-conmon-d2ef986c7d54922e89dcce2f21082a738795c8700b2b157e9c9e4fd9b7f1a368.scope.
Jan 26 12:41:38 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7186f7279d81fdbb0d24c3c4524b398710bb8ebb60fcea20d8f33f52f7c3c6dc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7186f7279d81fdbb0d24c3c4524b398710bb8ebb60fcea20d8f33f52f7c3c6dc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:38 np0005596060 podman[87509]: 2026-01-26 17:41:38.84791137 +0000 UTC m=+0.029205292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:41:38 np0005596060 podman[87509]: 2026-01-26 17:41:38.952379252 +0000 UTC m=+0.133673174 container init d2ef986c7d54922e89dcce2f21082a738795c8700b2b157e9c9e4fd9b7f1a368 (image=quay.io/ceph/ceph:v18, name=vigorous_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 26 12:41:38 np0005596060 podman[87509]: 2026-01-26 17:41:38.961804935 +0000 UTC m=+0.143098877 container start d2ef986c7d54922e89dcce2f21082a738795c8700b2b157e9c9e4fd9b7f1a368 (image=quay.io/ceph/ceph:v18, name=vigorous_zhukovsky, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:41:38 np0005596060 podman[87509]: 2026-01-26 17:41:38.965726272 +0000 UTC m=+0.147020204 container attach d2ef986c7d54922e89dcce2f21082a738795c8700b2b157e9c9e4fd9b7f1a368 (image=quay.io/ceph/ceph:v18, name=vigorous_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:41:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v64: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 26 12:41:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 26 12:41:39 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 12:41:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Jan 26 12:41:39 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Jan 26 12:41:39 np0005596060 ceph-mon[74267]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 26 12:41:39 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3706561662' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 12:41:39 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.client.admin.keyring
Jan 26 12:41:39 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.client.admin.keyring
Jan 26 12:41:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 26 12:41:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1447052594' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1447052594' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Jan 26 12:41:40 np0005596060 vigorous_zhukovsky[87524]: pool 'volumes' created
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: Updating compute-2:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.client.admin.keyring
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1447052594' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:40 np0005596060 systemd[1]: libpod-d2ef986c7d54922e89dcce2f21082a738795c8700b2b157e9c9e4fd9b7f1a368.scope: Deactivated successfully.
Jan 26 12:41:40 np0005596060 podman[87509]: 2026-01-26 17:41:40.462743231 +0000 UTC m=+1.644037133 container died d2ef986c7d54922e89dcce2f21082a738795c8700b2b157e9c9e4fd9b7f1a368 (image=quay.io/ceph/ceph:v18, name=vigorous_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:41:40 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7186f7279d81fdbb0d24c3c4524b398710bb8ebb60fcea20d8f33f52f7c3c6dc-merged.mount: Deactivated successfully.
Jan 26 12:41:40 np0005596060 podman[87509]: 2026-01-26 17:41:40.518885608 +0000 UTC m=+1.700179510 container remove d2ef986c7d54922e89dcce2f21082a738795c8700b2b157e9c9e4fd9b7f1a368 (image=quay.io/ceph/ceph:v18, name=vigorous_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:41:40 np0005596060 systemd[1]: libpod-conmon-d2ef986c7d54922e89dcce2f21082a738795c8700b2b157e9c9e4fd9b7f1a368.scope: Deactivated successfully.
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v67: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 26 12:41:40 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev 44782fbf-e3b7-431e-8fa8-c99967f41b1e (Updating mon deployment (+2 -> 3))
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:41:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:41:40 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 26 12:41:40 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 26 12:41:40 np0005596060 python3[87587]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:41:40 np0005596060 podman[87588]: 2026-01-26 17:41:40.91748776 +0000 UTC m=+0.043250680 container create 87db8b78c18decf3e1dd3ca52b660503c7ba17f960cf0de57049e7e348dae82b (image=quay.io/ceph/ceph:v18, name=modest_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:41:40 np0005596060 systemd[1]: Started libpod-conmon-87db8b78c18decf3e1dd3ca52b660503c7ba17f960cf0de57049e7e348dae82b.scope.
Jan 26 12:41:40 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8964305c01ac3a13d98da21f1034bbe405635269478a64ebac9c412985521d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8964305c01ac3a13d98da21f1034bbe405635269478a64ebac9c412985521d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:40 np0005596060 podman[87588]: 2026-01-26 17:41:40.994596165 +0000 UTC m=+0.120359105 container init 87db8b78c18decf3e1dd3ca52b660503c7ba17f960cf0de57049e7e348dae82b (image=quay.io/ceph/ceph:v18, name=modest_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:41:40 np0005596060 podman[87588]: 2026-01-26 17:41:40.90049853 +0000 UTC m=+0.026261460 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:41:41 np0005596060 podman[87588]: 2026-01-26 17:41:41.001459065 +0000 UTC m=+0.127221985 container start 87db8b78c18decf3e1dd3ca52b660503c7ba17f960cf0de57049e7e348dae82b (image=quay.io/ceph/ceph:v18, name=modest_pare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:41:41 np0005596060 podman[87588]: 2026-01-26 17:41:41.004236944 +0000 UTC m=+0.129999864 container attach 87db8b78c18decf3e1dd3ca52b660503c7ba17f960cf0de57049e7e348dae82b (image=quay.io/ceph/ceph:v18, name=modest_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:41:41 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:41 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1447052594' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 12:41:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 12:41:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 26 12:41:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1106791732' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 26 12:41:41 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 26 12:41:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1106791732' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 12:41:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Jan 26 12:41:41 np0005596060 modest_pare[87604]: pool 'backups' created
Jan 26 12:41:41 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Jan 26 12:41:41 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:41 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:41 np0005596060 systemd[1]: libpod-87db8b78c18decf3e1dd3ca52b660503c7ba17f960cf0de57049e7e348dae82b.scope: Deactivated successfully.
Jan 26 12:41:41 np0005596060 podman[87588]: 2026-01-26 17:41:41.788638601 +0000 UTC m=+0.914401521 container died 87db8b78c18decf3e1dd3ca52b660503c7ba17f960cf0de57049e7e348dae82b (image=quay.io/ceph/ceph:v18, name=modest_pare, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:41:41 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c8964305c01ac3a13d98da21f1034bbe405635269478a64ebac9c412985521d4-merged.mount: Deactivated successfully.
Jan 26 12:41:41 np0005596060 podman[87588]: 2026-01-26 17:41:41.832071544 +0000 UTC m=+0.957834464 container remove 87db8b78c18decf3e1dd3ca52b660503c7ba17f960cf0de57049e7e348dae82b (image=quay.io/ceph/ceph:v18, name=modest_pare, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 12:41:41 np0005596060 systemd[1]: libpod-conmon-87db8b78c18decf3e1dd3ca52b660503c7ba17f960cf0de57049e7e348dae82b.scope: Deactivated successfully.
Jan 26 12:41:42 np0005596060 python3[87668]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:41:42 np0005596060 podman[87669]: 2026-01-26 17:41:42.219043028 +0000 UTC m=+0.047293380 container create 6086467b64a7801f2e1e08799d3c8590e2ab42cefe7d11aac2ce711583e457f0 (image=quay.io/ceph/ceph:v18, name=awesome_mendeleev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:41:42 np0005596060 systemd[1]: Started libpod-conmon-6086467b64a7801f2e1e08799d3c8590e2ab42cefe7d11aac2ce711583e457f0.scope.
Jan 26 12:41:42 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8664687089659f01e5e56d757f7fafbf26508f1e3617d50d2d7d53c1cdd03c8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8664687089659f01e5e56d757f7fafbf26508f1e3617d50d2d7d53c1cdd03c8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:42 np0005596060 podman[87669]: 2026-01-26 17:41:42.196005749 +0000 UTC m=+0.024256141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:41:42 np0005596060 podman[87669]: 2026-01-26 17:41:42.295918488 +0000 UTC m=+0.124168860 container init 6086467b64a7801f2e1e08799d3c8590e2ab42cefe7d11aac2ce711583e457f0 (image=quay.io/ceph/ceph:v18, name=awesome_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 12:41:42 np0005596060 podman[87669]: 2026-01-26 17:41:42.302340537 +0000 UTC m=+0.130590879 container start 6086467b64a7801f2e1e08799d3c8590e2ab42cefe7d11aac2ce711583e457f0 (image=quay.io/ceph/ceph:v18, name=awesome_mendeleev, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:41:42 np0005596060 podman[87669]: 2026-01-26 17:41:42.306082969 +0000 UTC m=+0.134333341 container attach 6086467b64a7801f2e1e08799d3c8590e2ab42cefe7d11aac2ce711583e457f0 (image=quay.io/ceph/ceph:v18, name=awesome_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:41:42 np0005596060 ceph-mon[74267]: Deploying daemon mon.compute-2 on compute-2
Jan 26 12:41:42 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1106791732' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:42 np0005596060 ceph-mon[74267]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 26 12:41:42 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1106791732' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 12:41:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v69: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:41:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 26 12:41:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Jan 26 12:41:42 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 26 12:41:42 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 26 12:41:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3575996095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:41:43 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 26 12:41:43 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 26 12:41:43 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4020095797; not ready for session (expect reconnect)
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 12:41:43 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3575996095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 12:41:43 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 26 12:41:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 12:41:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:41:43
Jan 26 12:41:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:41:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Some PGs (0.250000) are inactive; try again later
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 12:41:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Jan 26 12:41:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4020095797; not ready for session (expect reconnect)
Jan 26 12:41:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 26 12:41:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 26 12:41:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v71: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:41:45 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4020095797; not ready for session (expect reconnect)
Jan 26 12:41:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 26 12:41:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 12:41:45 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 26 12:41:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:41:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 26 12:41:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 26 12:41:46 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1159197082; not ready for session (expect reconnect)
Jan 26 12:41:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 26 12:41:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 12:41:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 26 12:41:46 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 26 12:41:46 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4020095797; not ready for session (expect reconnect)
Jan 26 12:41:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 26 12:41:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 12:41:46 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 26 12:41:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v72: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:41:47 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1159197082; not ready for session (expect reconnect)
Jan 26 12:41:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 26 12:41:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 12:41:47 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 26 12:41:47 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4020095797; not ready for session (expect reconnect)
Jan 26 12:41:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 26 12:41:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 12:41:47 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 26 12:41:48 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1159197082; not ready for session (expect reconnect)
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 26 12:41:48 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 26 12:41:48 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4020095797; not ready for session (expect reconnect)
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 12:41:48 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 12:41:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v73: 4 pgs: 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap 
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mbryrf(active, since 2m)
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:48 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev 44782fbf-e3b7-431e-8fa8-c99967f41b1e (Updating mon deployment (+2 -> 3))
Jan 26 12:41:48 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 44782fbf-e3b7-431e-8fa8-c99967f41b1e (Updating mon deployment (+2 -> 3)) in 8 seconds
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:48 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev 3a45219b-7109-494c-96ef-433e2a1901ff (Updating mgr deployment (+2 -> 3))
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.cchxrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cchxrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cchxrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:41:48 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.cchxrf on compute-2
Jan 26 12:41:48 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.cchxrf on compute-2
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3575996095' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Jan 26 12:41:48 np0005596060 awesome_mendeleev[87685]: pool 'images' created
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: Deploying daemon mon.compute-1 on compute-1
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3575996095' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0 calling monitor election
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-2 calling monitor election
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Jan 26 12:41:48 np0005596060 ceph-mon[74267]:    application not enabled on pool 'vms'
Jan 26 12:41:48 np0005596060 ceph-mon[74267]:    application not enabled on pool 'volumes'
Jan 26 12:41:48 np0005596060 ceph-mon[74267]:    application not enabled on pool 'backups'
Jan 26 12:41:48 np0005596060 ceph-mon[74267]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cchxrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Jan 26 12:41:48 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev b2775f92-e6f7-4987-9a12-d5dc1ba936d9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Jan 26 12:41:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:41:48 np0005596060 systemd[1]: libpod-6086467b64a7801f2e1e08799d3c8590e2ab42cefe7d11aac2ce711583e457f0.scope: Deactivated successfully.
Jan 26 12:41:48 np0005596060 podman[87669]: 2026-01-26 17:41:48.8868257 +0000 UTC m=+6.715076052 container died 6086467b64a7801f2e1e08799d3c8590e2ab42cefe7d11aac2ce711583e457f0 (image=quay.io/ceph/ceph:v18, name=awesome_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 12:41:48 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a8664687089659f01e5e56d757f7fafbf26508f1e3617d50d2d7d53c1cdd03c8-merged.mount: Deactivated successfully.
Jan 26 12:41:48 np0005596060 podman[87669]: 2026-01-26 17:41:48.944499815 +0000 UTC m=+6.772750157 container remove 6086467b64a7801f2e1e08799d3c8590e2ab42cefe7d11aac2ce711583e457f0 (image=quay.io/ceph/ceph:v18, name=awesome_mendeleev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 12:41:48 np0005596060 systemd[1]: libpod-conmon-6086467b64a7801f2e1e08799d3c8590e2ab42cefe7d11aac2ce711583e457f0.scope: Deactivated successfully.
Jan 26 12:41:49 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1159197082; not ready for session (expect reconnect)
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 12:41:49 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 26 12:41:49 np0005596060 python3[87753]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:41:49 np0005596060 ceph-mgr[74563]: [progress INFO root] Writing back 3 completed events
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 26 12:41:49 np0005596060 podman[87754]: 2026-01-26 17:41:49.272998954 +0000 UTC m=+0.025422439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:41:49 np0005596060 podman[87754]: 2026-01-26 17:41:49.392149829 +0000 UTC m=+0.144573334 container create 682241c1fff7c46a2973589d4fdd4c498994c820d756f1f1a45d3c3e20a4065f (image=quay.io/ceph/ceph:v18, name=agitated_goldwasser, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:49 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 248af00d-20dd-40c5-9c8c-511ff88af765 (Global Recovery Event) in 5 seconds
Jan 26 12:41:49 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 19 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:49 np0005596060 systemd[1]: Started libpod-conmon-682241c1fff7c46a2973589d4fdd4c498994c820d756f1f1a45d3c3e20a4065f.scope.
Jan 26 12:41:49 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebaf5073d7365d8170a66dacfa608411312c99b879c2eee6aa05d0d2d0858fe2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebaf5073d7365d8170a66dacfa608411312c99b879c2eee6aa05d0d2d0858fe2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:49 np0005596060 podman[87754]: 2026-01-26 17:41:49.486937932 +0000 UTC m=+0.239361497 container init 682241c1fff7c46a2973589d4fdd4c498994c820d756f1f1a45d3c3e20a4065f (image=quay.io/ceph/ceph:v18, name=agitated_goldwasser, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:41:49 np0005596060 podman[87754]: 2026-01-26 17:41:49.492572761 +0000 UTC m=+0.244996246 container start 682241c1fff7c46a2973589d4fdd4c498994c820d756f1f1a45d3c3e20a4065f (image=quay.io/ceph/ceph:v18, name=agitated_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 12:41:49 np0005596060 podman[87754]: 2026-01-26 17:41:49.495875362 +0000 UTC m=+0.248298927 container attach 682241c1fff7c46a2973589d4fdd4c498994c820d756f1f1a45d3c3e20a4065f (image=quay.io/ceph/ceph:v18, name=agitated_goldwasser, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:41:49 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/4020095797; not ready for session (expect reconnect)
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cchxrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: Deploying daemon mgr.compute-2.cchxrf on compute-2
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3575996095' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Jan 26 12:41:49 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev f675e9ca-603a-46b6-bd5d-ba5f263e56a3 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 26 12:41:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:41:49 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 20 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [1] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3490187801' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:50 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1159197082; not ready for session (expect reconnect)
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 12:41:50 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3490187801' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 26 12:41:50 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:41:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v76: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 12:41:51 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1159197082; not ready for session (expect reconnect)
Jan 26 12:41:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 26 12:41:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 12:41:51 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 26 12:41:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 12:41:52 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1159197082; not ready for session (expect reconnect)
Jan 26 12:41:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 26 12:41:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 12:41:52 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 26 12:41:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 12:41:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v77: 5 pgs: 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:41:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:41:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:41:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:53 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1159197082; not ready for session (expect reconnect)
Jan 26 12:41:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 26 12:41:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 12:41:53 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 26 12:41:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 12:41:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 12:41:54 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1159197082; not ready for session (expect reconnect)
Jan 26 12:41:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 26 12:41:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 12:41:54 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 26 12:41:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 26 12:41:54 np0005596060 ceph-mgr[74563]: [progress INFO root] Writing back 4 completed events
Jan 26 12:41:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 26 12:41:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v78: 5 pgs: 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:41:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:41:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:41:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1159197082; not ready for session (expect reconnect)
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap 
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.mbryrf(active, since 2m)
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.qpyzhk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.qpyzhk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3490187801' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Jan 26 12:41:55 np0005596060 agitated_goldwasser[87769]: pool 'cephfs.cephfs.meta' created
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 12:41:55 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev 2dfb8142-763d-4c3a-a94e-c963c865011f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 26 12:41:55 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev b2775f92-e6f7-4987-9a12-d5dc1ba936d9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 26 12:41:55 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event b2775f92-e6f7-4987-9a12-d5dc1ba936d9 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 6 seconds
Jan 26 12:41:55 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev f675e9ca-603a-46b6-bd5d-ba5f263e56a3 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 26 12:41:55 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event f675e9ca-603a-46b6-bd5d-ba5f263e56a3 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 5 seconds
Jan 26 12:41:55 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev 2dfb8142-763d-4c3a-a94e-c963c865011f (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 26 12:41:55 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 2dfb8142-763d-4c3a-a94e-c963c865011f (PG autoscaler increasing pool 4 PGs from 1 to 32) in 0 seconds
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3490187801' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0 calling monitor election
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-2 calling monitor election
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-1 calling monitor election
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Jan 26 12:41:55 np0005596060 ceph-mon[74267]:    application not enabled on pool 'vms'
Jan 26 12:41:55 np0005596060 ceph-mon[74267]:    application not enabled on pool 'volumes'
Jan 26 12:41:55 np0005596060 ceph-mon[74267]:    application not enabled on pool 'backups'
Jan 26 12:41:55 np0005596060 ceph-mon[74267]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.qpyzhk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.qpyzhk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 12:41:55 np0005596060 systemd[1]: libpod-682241c1fff7c46a2973589d4fdd4c498994c820d756f1f1a45d3c3e20a4065f.scope: Deactivated successfully.
Jan 26 12:41:55 np0005596060 podman[87754]: 2026-01-26 17:41:55.236041549 +0000 UTC m=+5.988465034 container died 682241c1fff7c46a2973589d4fdd4c498994c820d756f1f1a45d3c3e20a4065f (image=quay.io/ceph/ceph:v18, name=agitated_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:41:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:41:55 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.qpyzhk on compute-1
Jan 26 12:41:55 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.qpyzhk on compute-1
Jan 26 12:41:55 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ebaf5073d7365d8170a66dacfa608411312c99b879c2eee6aa05d0d2d0858fe2-merged.mount: Deactivated successfully.
Jan 26 12:41:55 np0005596060 systemd[75887]: Starting Mark boot as successful...
Jan 26 12:41:55 np0005596060 systemd[75887]: Finished Mark boot as successful.
Jan 26 12:41:55 np0005596060 podman[87754]: 2026-01-26 17:41:55.298754599 +0000 UTC m=+6.051178064 container remove 682241c1fff7c46a2973589d4fdd4c498994c820d756f1f1a45d3c3e20a4065f (image=quay.io/ceph/ceph:v18, name=agitated_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 12:41:55 np0005596060 systemd[1]: libpod-conmon-682241c1fff7c46a2973589d4fdd4c498994c820d756f1f1a45d3c3e20a4065f.scope: Deactivated successfully.
Jan 26 12:41:55 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 21 pg[6.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:55 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=21 pruub=10.373961449s) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active pruub 54.469459534s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:41:55 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=21 pruub=10.373961449s) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown pruub 54.469459534s@ mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:55 np0005596060 python3[87838]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:41:55 np0005596060 podman[87839]: 2026-01-26 17:41:55.736366774 +0000 UTC m=+0.079139297 container create 7a4a809c723416f33c9b90eb0d11dd298c20a4609a62fb43e6f91c33a42026a4 (image=quay.io/ceph/ceph:v18, name=jovial_ramanujan, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:41:55 np0005596060 podman[87839]: 2026-01-26 17:41:55.680546455 +0000 UTC m=+0.023318998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:41:55 np0005596060 systemd[1]: Started libpod-conmon-7a4a809c723416f33c9b90eb0d11dd298c20a4609a62fb43e6f91c33a42026a4.scope.
Jan 26 12:41:55 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9940e8795fd48eab2def45581d363d93a64edc49c76d94242bc0c850424cfaf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9940e8795fd48eab2def45581d363d93a64edc49c76d94242bc0c850424cfaf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:55 np0005596060 podman[87839]: 2026-01-26 17:41:55.903841704 +0000 UTC m=+0.246614247 container init 7a4a809c723416f33c9b90eb0d11dd298c20a4609a62fb43e6f91c33a42026a4 (image=quay.io/ceph/ceph:v18, name=jovial_ramanujan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:41:55 np0005596060 podman[87839]: 2026-01-26 17:41:55.910793565 +0000 UTC m=+0.253566088 container start 7a4a809c723416f33c9b90eb0d11dd298c20a4609a62fb43e6f91c33a42026a4 (image=quay.io/ceph/ceph:v18, name=jovial_ramanujan, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:41:55 np0005596060 podman[87839]: 2026-01-26 17:41:55.976758006 +0000 UTC m=+0.319530559 container attach 7a4a809c723416f33c9b90eb0d11dd298c20a4609a62fb43e6f91c33a42026a4 (image=quay.io/ceph/ceph:v18, name=jovial_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 26 12:41:56 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/1159197082; not ready for session (expect reconnect)
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3490187801' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.qpyzhk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: Deploying daemon mgr.compute-1.qpyzhk on compute-1
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1c( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1d( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1b( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1f( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1e( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1a( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.9( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.8( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.4( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.3( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.2( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.6( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.7( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.a( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.5( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.c( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.d( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.e( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.f( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.b( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.10( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.11( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.12( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.13( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.14( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.15( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.16( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.17( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.18( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.19( empty local-lis/les=16/17 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1c( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1d( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1f( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1e( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.9( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.4( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.3( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.2( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[6.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [1] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.1a( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.6( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.7( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.a( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.c( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.e( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.5( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.d( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.0( empty local-lis/les=21/22 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.10( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.b( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.11( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.13( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.12( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.14( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.15( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.16( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.17( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.18( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.f( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 22 pg[3.19( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=16/16 les/c/f=17/17/0 sis=21) [1] r=0 lpr=21 pi=[16,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3814547469' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v81: 68 pgs: 1 peering, 63 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:41:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:57 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev 3a45219b-7109-494c-96ef-433e2a1901ff (Updating mgr deployment (+2 -> 3))
Jan 26 12:41:57 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 3a45219b-7109-494c-96ef-433e2a1901ff (Updating mgr deployment (+2 -> 3)) in 8 seconds
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:57 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev 99cc0c19-7ceb-4cea-b5a6-f376ecb6b27d (Updating crash deployment (+1 -> 3))
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:41:57 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 26 12:41:57 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 26 12:41:57 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3814547469' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Jan 26 12:41:57 np0005596060 jovial_ramanujan[87854]: pool 'cephfs.cephfs.data' created
Jan 26 12:41:57 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 26 12:41:57 np0005596060 systemd[1]: libpod-7a4a809c723416f33c9b90eb0d11dd298c20a4609a62fb43e6f91c33a42026a4.scope: Deactivated successfully.
Jan 26 12:41:57 np0005596060 conmon[87854]: conmon 7a4a809c723416f33c9b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a4a809c723416f33c9b90eb0d11dd298c20a4609a62fb43e6f91c33a42026a4.scope/container/memory.events
Jan 26 12:41:57 np0005596060 podman[87839]: 2026-01-26 17:41:57.371982959 +0000 UTC m=+1.714755492 container died 7a4a809c723416f33c9b90eb0d11dd298c20a4609a62fb43e6f91c33a42026a4 (image=quay.io/ceph/ceph:v18, name=jovial_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3814547469' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 12:41:57 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Jan 26 12:41:57 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 23 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23 pruub=9.053876877s) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active pruub 55.471595764s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:41:57 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 23 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23 pruub=9.053876877s) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown pruub 55.471595764s@ mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a9940e8795fd48eab2def45581d363d93a64edc49c76d94242bc0c850424cfaf-merged.mount: Deactivated successfully.
Jan 26 12:41:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 26 12:41:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 26 12:41:58 np0005596060 ceph-mon[74267]: Deploying daemon crash.compute-2 on compute-2
Jan 26 12:41:58 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3814547469' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 26 12:41:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:41:58 np0005596060 podman[87839]: 2026-01-26 17:41:58.46374785 +0000 UTC m=+2.806520373 container remove 7a4a809c723416f33c9b90eb0d11dd298c20a4609a62fb43e6f91c33a42026a4 (image=quay.io/ceph/ceph:v18, name=jovial_ramanujan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 12:41:58 np0005596060 systemd[1]: libpod-conmon-7a4a809c723416f33c9b90eb0d11dd298c20a4609a62fb43e6f91c33a42026a4.scope: Deactivated successfully.
Jan 26 12:41:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v83: 100 pgs: 1 peering, 63 unknown, 36 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:41:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Jan 26 12:41:58 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Jan 26 12:41:58 np0005596060 python3[87919]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1e( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1f( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.10( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.11( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.12( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.13( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.14( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.15( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.17( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.16( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.8( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.9( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.a( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.b( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.d( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.7( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.c( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.2( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.6( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.5( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.f( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1d( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.3( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.4( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.e( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.19( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1c( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1a( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.18( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1b( empty local-lis/les=17/18 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1e( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.11( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.12( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.10( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 podman[87920]: 2026-01-26 17:41:58.942868662 +0000 UTC m=+0.088038997 container create 0d923bc9a20aa6ef3846ff8096dc6a365c53160c972a6988be3106f1139e9ec2 (image=quay.io/ceph/ceph:v18, name=stupefied_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.16( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.0( empty local-lis/les=23/24 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.17( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.7( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.4( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 24 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=17/17 les/c/f=18/18/0 sis=23) [1] r=0 lpr=23 pi=[17,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:41:58 np0005596060 podman[87920]: 2026-01-26 17:41:58.878483491 +0000 UTC m=+0.023653846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:41:59 np0005596060 systemd[1]: Started libpod-conmon-0d923bc9a20aa6ef3846ff8096dc6a365c53160c972a6988be3106f1139e9ec2.scope.
Jan 26 12:41:59 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:41:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c10fb503bf5514c8d9777f8375f22552b61b8dd1d111dc4c35e607dcec37ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2c10fb503bf5514c8d9777f8375f22552b61b8dd1d111dc4c35e607dcec37ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:41:59 np0005596060 podman[87920]: 2026-01-26 17:41:59.140615069 +0000 UTC m=+0.285785414 container init 0d923bc9a20aa6ef3846ff8096dc6a365c53160c972a6988be3106f1139e9ec2 (image=quay.io/ceph/ceph:v18, name=stupefied_lamport, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 12:41:59 np0005596060 podman[87920]: 2026-01-26 17:41:59.151313583 +0000 UTC m=+0.296483918 container start 0d923bc9a20aa6ef3846ff8096dc6a365c53160c972a6988be3106f1139e9ec2 (image=quay.io/ceph/ceph:v18, name=stupefied_lamport, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 12:41:59 np0005596060 podman[87920]: 2026-01-26 17:41:59.179219843 +0000 UTC m=+0.324390178 container attach 0d923bc9a20aa6ef3846ff8096dc6a365c53160c972a6988be3106f1139e9ec2 (image=quay.io/ceph/ceph:v18, name=stupefied_lamport, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:41:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 26 12:41:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Jan 26 12:41:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1605417848' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 26 12:42:00 np0005596060 ceph-mgr[74563]: [progress INFO root] Writing back 8 completed events
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 26 12:42:00 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 26 12:42:00 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:00 np0005596060 ceph-mgr[74563]: [progress WARNING root] Starting Global Recovery Event,64 pgs not in active + clean state
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:00 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev 99cc0c19-7ceb-4cea-b5a6-f376ecb6b27d (Updating crash deployment (+1 -> 3))
Jan 26 12:42:00 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 99cc0c19-7ceb-4cea-b5a6-f376ecb6b27d (Updating crash deployment (+1 -> 3)) in 3 seconds
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:42:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v86: 100 pgs: 1 peering, 63 unknown, 36 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:42:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:42:01 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 26 12:42:01 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 26 12:42:01 np0005596060 podman[88100]: 2026-01-26 17:42:01.352394753 +0000 UTC m=+0.037617811 container create d143b8f71b0b1157b45f894b1c67438fd4dfae93678d30962ea404cfe69f12c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 26 12:42:01 np0005596060 systemd[1]: Started libpod-conmon-d143b8f71b0b1157b45f894b1c67438fd4dfae93678d30962ea404cfe69f12c4.scope.
Jan 26 12:42:01 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:01 np0005596060 podman[88100]: 2026-01-26 17:42:01.336552562 +0000 UTC m=+0.021775640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:01 np0005596060 podman[88100]: 2026-01-26 17:42:01.433365073 +0000 UTC m=+0.118588181 container init d143b8f71b0b1157b45f894b1c67438fd4dfae93678d30962ea404cfe69f12c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_burnell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 12:42:01 np0005596060 podman[88100]: 2026-01-26 17:42:01.43928794 +0000 UTC m=+0.124510998 container start d143b8f71b0b1157b45f894b1c67438fd4dfae93678d30962ea404cfe69f12c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_burnell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:01 np0005596060 podman[88100]: 2026-01-26 17:42:01.442710964 +0000 UTC m=+0.127934032 container attach d143b8f71b0b1157b45f894b1c67438fd4dfae93678d30962ea404cfe69f12c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_burnell, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 12:42:01 np0005596060 hopeful_burnell[88116]: 167 167
Jan 26 12:42:01 np0005596060 systemd[1]: libpod-d143b8f71b0b1157b45f894b1c67438fd4dfae93678d30962ea404cfe69f12c4.scope: Deactivated successfully.
Jan 26 12:42:01 np0005596060 conmon[88116]: conmon d143b8f71b0b1157b45f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d143b8f71b0b1157b45f894b1c67438fd4dfae93678d30962ea404cfe69f12c4.scope/container/memory.events
Jan 26 12:42:01 np0005596060 podman[88121]: 2026-01-26 17:42:01.483559834 +0000 UTC m=+0.023637745 container died d143b8f71b0b1157b45f894b1c67438fd4dfae93678d30962ea404cfe69f12c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:42:01 np0005596060 systemd[1]: var-lib-containers-storage-overlay-04f50213da0045f8a57df4f8e5a708c530e585265bb37ea93d8217d9062cdf9d-merged.mount: Deactivated successfully.
Jan 26 12:42:01 np0005596060 podman[88121]: 2026-01-26 17:42:01.521660346 +0000 UTC m=+0.061738287 container remove d143b8f71b0b1157b45f894b1c67438fd4dfae93678d30962ea404cfe69f12c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 12:42:01 np0005596060 systemd[1]: libpod-conmon-d143b8f71b0b1157b45f894b1c67438fd4dfae93678d30962ea404cfe69f12c4.scope: Deactivated successfully.
Jan 26 12:42:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 26 12:42:01 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1605417848' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 26 12:42:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:42:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:42:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1605417848' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 26 12:42:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Jan 26 12:42:01 np0005596060 stupefied_lamport[87935]: enabled application 'rbd' on pool 'vms'
Jan 26 12:42:01 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Jan 26 12:42:01 np0005596060 systemd[1]: libpod-0d923bc9a20aa6ef3846ff8096dc6a365c53160c972a6988be3106f1139e9ec2.scope: Deactivated successfully.
Jan 26 12:42:01 np0005596060 podman[87920]: 2026-01-26 17:42:01.637072288 +0000 UTC m=+2.782242633 container died 0d923bc9a20aa6ef3846ff8096dc6a365c53160c972a6988be3106f1139e9ec2 (image=quay.io/ceph/ceph:v18, name=stupefied_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:01 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a2c10fb503bf5514c8d9777f8375f22552b61b8dd1d111dc4c35e607dcec37ba-merged.mount: Deactivated successfully.
Jan 26 12:42:01 np0005596060 podman[87920]: 2026-01-26 17:42:01.679829595 +0000 UTC m=+2.824999930 container remove 0d923bc9a20aa6ef3846ff8096dc6a365c53160c972a6988be3106f1139e9ec2 (image=quay.io/ceph/ceph:v18, name=stupefied_lamport, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:42:01 np0005596060 systemd[1]: libpod-conmon-0d923bc9a20aa6ef3846ff8096dc6a365c53160c972a6988be3106f1139e9ec2.scope: Deactivated successfully.
Jan 26 12:42:01 np0005596060 podman[88144]: 2026-01-26 17:42:01.719473665 +0000 UTC m=+0.078848960 container create c428ef2cf572c2bf5e3b0a39bf43a70fd186d7d803f54f76f02ba0753e012dcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ellis, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:01 np0005596060 systemd[1]: Started libpod-conmon-c428ef2cf572c2bf5e3b0a39bf43a70fd186d7d803f54f76f02ba0753e012dcb.scope.
Jan 26 12:42:01 np0005596060 podman[88144]: 2026-01-26 17:42:01.697750448 +0000 UTC m=+0.057125773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:01 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4254c1267db794a09dc05b14a5668bb71e5b6ba8f192dacefb68c91841bd69b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4254c1267db794a09dc05b14a5668bb71e5b6ba8f192dacefb68c91841bd69b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4254c1267db794a09dc05b14a5668bb71e5b6ba8f192dacefb68c91841bd69b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4254c1267db794a09dc05b14a5668bb71e5b6ba8f192dacefb68c91841bd69b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4254c1267db794a09dc05b14a5668bb71e5b6ba8f192dacefb68c91841bd69b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:01 np0005596060 podman[88144]: 2026-01-26 17:42:01.835685187 +0000 UTC m=+0.195060502 container init c428ef2cf572c2bf5e3b0a39bf43a70fd186d7d803f54f76f02ba0753e012dcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 12:42:01 np0005596060 podman[88144]: 2026-01-26 17:42:01.843326816 +0000 UTC m=+0.202702111 container start c428ef2cf572c2bf5e3b0a39bf43a70fd186d7d803f54f76f02ba0753e012dcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 12:42:01 np0005596060 podman[88144]: 2026-01-26 17:42:01.846423202 +0000 UTC m=+0.205798487 container attach c428ef2cf572c2bf5e3b0a39bf43a70fd186d7d803f54f76f02ba0753e012dcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ellis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 12:42:01 np0005596060 python3[88202]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:02 np0005596060 podman[88204]: 2026-01-26 17:42:02.056294019 +0000 UTC m=+0.059174453 container create 3103fdd100ba1a476f638816de6f847417047cef9635b3faffd2b05df2485caf (image=quay.io/ceph/ceph:v18, name=admiring_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:02 np0005596060 podman[88204]: 2026-01-26 17:42:02.025765175 +0000 UTC m=+0.028645619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:02 np0005596060 systemd[1]: Started libpod-conmon-3103fdd100ba1a476f638816de6f847417047cef9635b3faffd2b05df2485caf.scope.
Jan 26 12:42:02 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0371cc3481ab2770ee04aed3c9cc60116a064531ae35be56f3f9812bf80f130/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0371cc3481ab2770ee04aed3c9cc60116a064531ae35be56f3f9812bf80f130/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:02 np0005596060 podman[88204]: 2026-01-26 17:42:02.393007061 +0000 UTC m=+0.395887485 container init 3103fdd100ba1a476f638816de6f847417047cef9635b3faffd2b05df2485caf (image=quay.io/ceph/ceph:v18, name=admiring_haibt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:02 np0005596060 podman[88204]: 2026-01-26 17:42:02.416192154 +0000 UTC m=+0.419072578 container start 3103fdd100ba1a476f638816de6f847417047cef9635b3faffd2b05df2485caf (image=quay.io/ceph/ceph:v18, name=admiring_haibt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 12:42:02 np0005596060 podman[88204]: 2026-01-26 17:42:02.636994261 +0000 UTC m=+0.639874705 container attach 3103fdd100ba1a476f638816de6f847417047cef9635b3faffd2b05df2485caf (image=quay.io/ceph/ceph:v18, name=admiring_haibt, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:42:02 np0005596060 sad_ellis[88175]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:42:02 np0005596060 sad_ellis[88175]: --> relative data size: 1.0
Jan 26 12:42:02 np0005596060 sad_ellis[88175]: --> All data devices are unavailable
Jan 26 12:42:02 np0005596060 systemd[1]: libpod-c428ef2cf572c2bf5e3b0a39bf43a70fd186d7d803f54f76f02ba0753e012dcb.scope: Deactivated successfully.
Jan 26 12:42:02 np0005596060 podman[88144]: 2026-01-26 17:42:02.689786146 +0000 UTC m=+1.049161441 container died c428ef2cf572c2bf5e3b0a39bf43a70fd186d7d803f54f76f02ba0753e012dcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ellis, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 12:42:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v88: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1605417848' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1717325685' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "9cf3a1cc-aed3-427e-a898-1ddf0c091222"} v 0) v1
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9cf3a1cc-aed3-427e-a898-1ddf0c091222"}]: dispatch
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Jan 26 12:42:03 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4254c1267db794a09dc05b14a5668bb71e5b6ba8f192dacefb68c91841bd69b1-merged.mount: Deactivated successfully.
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 26 12:42:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e27 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.1e( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.1f( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.1b( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.9( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.4( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.6( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.1( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.a( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.d( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.c( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.e( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.10( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.13( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.15( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[2.19( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.724902153s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.582618713s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.724862099s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.582618713s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.16( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.204085350s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061893463s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.15( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.204048157s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061874390s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.14( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203989983s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061862946s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.16( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.204040527s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061893463s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.15( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.204007149s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061874390s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773678780s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.631572723s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.14( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203953743s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061862946s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773631096s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.631572723s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.13( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203805923s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061813354s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.11( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203785896s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061809540s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773596764s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.631649017s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.13( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203771591s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061813354s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.11( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203766823s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061809540s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.10( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203695297s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061786652s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773568153s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.631649017s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.10( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203669548s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061786652s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.f( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203824043s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061954498s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773524284s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.631694794s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.f( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203789711s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061954498s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.e( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203528404s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061725616s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773508072s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.631694794s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.e( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203473091s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061725616s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.d( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203478813s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061759949s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.d( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203432083s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061759949s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773470879s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.631816864s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.c( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203351021s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061710358s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773452759s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.631816864s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773456573s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.631908417s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.c( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203321457s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061710358s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773440361s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.631908417s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773313522s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.631843567s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.a( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203120232s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061668396s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.a( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.203088760s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061668396s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773272514s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.631843567s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773458481s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.632137299s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773441315s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.632137299s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.5( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.202968597s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061676025s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.5( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.202951431s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061676025s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.772972107s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.631725311s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.3( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.202206612s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061000824s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.772926331s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.631725311s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.3( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.202188492s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061000824s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773110390s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.631961823s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.9( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.186306000s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.045196533s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.1a( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.202565193s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.061527252s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.1a( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.202548027s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.061527252s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773303032s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.632354736s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773077011s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.631961823s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.1c( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.185856819s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.045013428s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773023605s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.632209778s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.9( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.186094284s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.045196533s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.1c( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.185824394s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.045013428s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.772961617s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.632209778s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.773259163s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.632354736s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.772844315s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.632240295s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.772808075s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.632240295s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.1d( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.185586929s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active pruub 61.045032501s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[3.1d( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=27 pruub=8.185563087s) [0] r=-1 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 61.045032501s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.772734642s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 active pruub 63.632312775s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 27 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=27 pruub=10.772707939s) [0] r=-1 lpr=27 pi=[23,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.632312775s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1717325685' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9cf3a1cc-aed3-427e-a898-1ddf0c091222"}]': finished
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Jan 26 12:42:04 np0005596060 admiring_haibt[88220]: enabled application 'rbd' on pool 'volumes'
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:04 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 12:42:04 np0005596060 systemd[1]: libpod-3103fdd100ba1a476f638816de6f847417047cef9635b3faffd2b05df2485caf.scope: Deactivated successfully.
Jan 26 12:42:04 np0005596060 podman[88144]: 2026-01-26 17:42:04.219385 +0000 UTC m=+2.578760295 container remove c428ef2cf572c2bf5e3b0a39bf43a70fd186d7d803f54f76f02ba0753e012dcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 12:42:04 np0005596060 podman[88204]: 2026-01-26 17:42:04.222069686 +0000 UTC m=+2.224950100 container died 3103fdd100ba1a476f638816de6f847417047cef9635b3faffd2b05df2485caf (image=quay.io/ceph/ceph:v18, name=admiring_haibt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.102:0/1601559491' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9cf3a1cc-aed3-427e-a898-1ddf0c091222"}]: dispatch
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1717325685' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "9cf3a1cc-aed3-427e-a898-1ddf0c091222"}]: dispatch
Jan 26 12:42:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:42:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a0371cc3481ab2770ee04aed3c9cc60116a064531ae35be56f3f9812bf80f130-merged.mount: Deactivated successfully.
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 26 12:42:04 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 26 12:42:04 np0005596060 podman[88204]: 2026-01-26 17:42:04.341916278 +0000 UTC m=+2.344796702 container remove 3103fdd100ba1a476f638816de6f847417047cef9635b3faffd2b05df2485caf (image=quay.io/ceph/ceph:v18, name=admiring_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 26 12:42:04 np0005596060 systemd[1]: libpod-conmon-3103fdd100ba1a476f638816de6f847417047cef9635b3faffd2b05df2485caf.scope: Deactivated successfully.
Jan 26 12:42:04 np0005596060 systemd[1]: libpod-conmon-c428ef2cf572c2bf5e3b0a39bf43a70fd186d7d803f54f76f02ba0753e012dcb.scope: Deactivated successfully.
Jan 26 12:42:04 np0005596060 python3[88400]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:04 np0005596060 podman[88405]: 2026-01-26 17:42:04.68672785 +0000 UTC m=+0.060260350 container create 27575705841f22d326eaabbb98739557bac58e847cd1521cf02cdfff2eda7946 (image=quay.io/ceph/ceph:v18, name=trusting_franklin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 26 12:42:04 np0005596060 systemd[1]: Started libpod-conmon-27575705841f22d326eaabbb98739557bac58e847cd1521cf02cdfff2eda7946.scope.
Jan 26 12:42:04 np0005596060 podman[88405]: 2026-01-26 17:42:04.665039484 +0000 UTC m=+0.038572074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:04 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd735251f137987f73a29ef57856ab90111b25d4e0f42f41b292ecde7575132a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd735251f137987f73a29ef57856ab90111b25d4e0f42f41b292ecde7575132a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v91: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:42:04 np0005596060 podman[88405]: 2026-01-26 17:42:04.773975817 +0000 UTC m=+0.147508327 container init 27575705841f22d326eaabbb98739557bac58e847cd1521cf02cdfff2eda7946 (image=quay.io/ceph/ceph:v18, name=trusting_franklin, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:42:04 np0005596060 podman[88405]: 2026-01-26 17:42:04.780943549 +0000 UTC m=+0.154476049 container start 27575705841f22d326eaabbb98739557bac58e847cd1521cf02cdfff2eda7946 (image=quay.io/ceph/ceph:v18, name=trusting_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 12:42:04 np0005596060 podman[88405]: 2026-01-26 17:42:04.784319662 +0000 UTC m=+0.157852202 container attach 27575705841f22d326eaabbb98739557bac58e847cd1521cf02cdfff2eda7946 (image=quay.io/ceph/ceph:v18, name=trusting_franklin, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:04 np0005596060 podman[88461]: 2026-01-26 17:42:04.8760571 +0000 UTC m=+0.059072231 container create b170314d79edc3db5e0743db44ba5d04478fe2b091b5b7089ce8e1f02dc6a687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 12:42:04 np0005596060 systemd[1]: Started libpod-conmon-b170314d79edc3db5e0743db44ba5d04478fe2b091b5b7089ce8e1f02dc6a687.scope.
Jan 26 12:42:04 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:04 np0005596060 podman[88461]: 2026-01-26 17:42:04.933632913 +0000 UTC m=+0.116648054 container init b170314d79edc3db5e0743db44ba5d04478fe2b091b5b7089ce8e1f02dc6a687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cray, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 12:42:04 np0005596060 podman[88461]: 2026-01-26 17:42:04.93884761 +0000 UTC m=+0.121862751 container start b170314d79edc3db5e0743db44ba5d04478fe2b091b5b7089ce8e1f02dc6a687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cray, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:42:04 np0005596060 podman[88461]: 2026-01-26 17:42:04.942131242 +0000 UTC m=+0.125146383 container attach b170314d79edc3db5e0743db44ba5d04478fe2b091b5b7089ce8e1f02dc6a687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cray, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 26 12:42:04 np0005596060 vigilant_cray[88478]: 167 167
Jan 26 12:42:04 np0005596060 systemd[1]: libpod-b170314d79edc3db5e0743db44ba5d04478fe2b091b5b7089ce8e1f02dc6a687.scope: Deactivated successfully.
Jan 26 12:42:04 np0005596060 podman[88461]: 2026-01-26 17:42:04.943796533 +0000 UTC m=+0.126811664 container died b170314d79edc3db5e0743db44ba5d04478fe2b091b5b7089ce8e1f02dc6a687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 12:42:04 np0005596060 podman[88461]: 2026-01-26 17:42:04.852800855 +0000 UTC m=+0.035816076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ff1a078907077ddefdbf36bc443420f2efa0d4b0a06b738a4e78ec2df385dfa3-merged.mount: Deactivated successfully.
Jan 26 12:42:04 np0005596060 podman[88461]: 2026-01-26 17:42:04.982605302 +0000 UTC m=+0.165620463 container remove b170314d79edc3db5e0743db44ba5d04478fe2b091b5b7089ce8e1f02dc6a687 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_cray, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 12:42:04 np0005596060 systemd[1]: libpod-conmon-b170314d79edc3db5e0743db44ba5d04478fe2b091b5b7089ce8e1f02dc6a687.scope: Deactivated successfully.
Jan 26 12:42:05 np0005596060 podman[88501]: 2026-01-26 17:42:05.158913739 +0000 UTC m=+0.046354426 container create 88345287fc3142cf90642370a93d314f5ee4a13296211fc3496c1af00da36c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 26 12:42:05 np0005596060 systemd[1]: Started libpod-conmon-88345287fc3142cf90642370a93d314f5ee4a13296211fc3496c1af00da36c8c.scope.
Jan 26 12:42:05 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d6403fb00cf307a4cdf7294e40f12cb38ebbc796303386116e580d450285c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d6403fb00cf307a4cdf7294e40f12cb38ebbc796303386116e580d450285c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d6403fb00cf307a4cdf7294e40f12cb38ebbc796303386116e580d450285c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4d6403fb00cf307a4cdf7294e40f12cb38ebbc796303386116e580d450285c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:05 np0005596060 podman[88501]: 2026-01-26 17:42:05.138910935 +0000 UTC m=+0.026351652 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:05 np0005596060 podman[88501]: 2026-01-26 17:42:05.239990753 +0000 UTC m=+0.127431520 container init 88345287fc3142cf90642370a93d314f5ee4a13296211fc3496c1af00da36c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 12:42:05 np0005596060 podman[88501]: 2026-01-26 17:42:05.24512429 +0000 UTC m=+0.132564957 container start 88345287fc3142cf90642370a93d314f5ee4a13296211fc3496c1af00da36c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:05 np0005596060 podman[88501]: 2026-01-26 17:42:05.24796876 +0000 UTC m=+0.135409467 container attach 88345287fc3142cf90642370a93d314f5ee4a13296211fc3496c1af00da36c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/814326835' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:05 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=27) [1] r=0 lpr=27 pi=[21,27)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:05 np0005596060 ceph-mgr[74563]: [progress INFO root] Writing back 9 completed events
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1717325685' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "9cf3a1cc-aed3-427e-a898-1ddf0c091222"}]': finished
Jan 26 12:42:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:05 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 26cadeb7-7f09-472d-876d-a0c56b15d244 (Global Recovery Event) in 5 seconds
Jan 26 12:42:06 np0005596060 priceless_wing[88536]: {
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:    "1": [
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:        {
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            "devices": [
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "/dev/loop3"
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            ],
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            "lv_name": "ceph_lv0",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            "lv_size": "7511998464",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            "name": "ceph_lv0",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            "tags": {
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "ceph.cluster_name": "ceph",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "ceph.crush_device_class": "",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "ceph.encrypted": "0",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "ceph.osd_id": "1",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "ceph.type": "block",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:                "ceph.vdo": "0"
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            },
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            "type": "block",
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:            "vg_name": "ceph_vg0"
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:        }
Jan 26 12:42:06 np0005596060 priceless_wing[88536]:    ]
Jan 26 12:42:06 np0005596060 priceless_wing[88536]: }
Jan 26 12:42:06 np0005596060 systemd[1]: libpod-88345287fc3142cf90642370a93d314f5ee4a13296211fc3496c1af00da36c8c.scope: Deactivated successfully.
Jan 26 12:42:06 np0005596060 podman[88546]: 2026-01-26 17:42:06.135383143 +0000 UTC m=+0.024917257 container died 88345287fc3142cf90642370a93d314f5ee4a13296211fc3496c1af00da36c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 26 12:42:06 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a4d6403fb00cf307a4cdf7294e40f12cb38ebbc796303386116e580d450285c0-merged.mount: Deactivated successfully.
Jan 26 12:42:06 np0005596060 podman[88546]: 2026-01-26 17:42:06.178310914 +0000 UTC m=+0.067845008 container remove 88345287fc3142cf90642370a93d314f5ee4a13296211fc3496c1af00da36c8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 12:42:06 np0005596060 systemd[1]: libpod-conmon-88345287fc3142cf90642370a93d314f5ee4a13296211fc3496c1af00da36c8c.scope: Deactivated successfully.
Jan 26 12:42:06 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 26 12:42:06 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 26 12:42:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 26 12:42:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/814326835' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 26 12:42:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Jan 26 12:42:06 np0005596060 trusting_franklin[88453]: enabled application 'rbd' on pool 'backups'
Jan 26 12:42:06 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Jan 26 12:42:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:06 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 12:42:06 np0005596060 systemd[1]: libpod-27575705841f22d326eaabbb98739557bac58e847cd1521cf02cdfff2eda7946.scope: Deactivated successfully.
Jan 26 12:42:06 np0005596060 podman[88405]: 2026-01-26 17:42:06.554515962 +0000 UTC m=+1.928048462 container died 27575705841f22d326eaabbb98739557bac58e847cd1521cf02cdfff2eda7946 (image=quay.io/ceph/ceph:v18, name=trusting_franklin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:06 np0005596060 systemd[1]: var-lib-containers-storage-overlay-bd735251f137987f73a29ef57856ab90111b25d4e0f42f41b292ecde7575132a-merged.mount: Deactivated successfully.
Jan 26 12:42:06 np0005596060 podman[88405]: 2026-01-26 17:42:06.603603955 +0000 UTC m=+1.977136465 container remove 27575705841f22d326eaabbb98739557bac58e847cd1521cf02cdfff2eda7946 (image=quay.io/ceph/ceph:v18, name=trusting_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 12:42:06 np0005596060 systemd[1]: libpod-conmon-27575705841f22d326eaabbb98739557bac58e847cd1521cf02cdfff2eda7946.scope: Deactivated successfully.
Jan 26 12:42:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v94: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:42:06 np0005596060 podman[88732]: 2026-01-26 17:42:06.780741703 +0000 UTC m=+0.039495427 container create 8833c867a6434958d5e77746b857236ba83f8420b2f62462aa8837cdcdd6c88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 12:42:06 np0005596060 systemd[1]: Started libpod-conmon-8833c867a6434958d5e77746b857236ba83f8420b2f62462aa8837cdcdd6c88a.scope.
Jan 26 12:42:06 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:06 np0005596060 podman[88732]: 2026-01-26 17:42:06.762726568 +0000 UTC m=+0.021480382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:06 np0005596060 podman[88732]: 2026-01-26 17:42:06.862256418 +0000 UTC m=+0.121010212 container init 8833c867a6434958d5e77746b857236ba83f8420b2f62462aa8837cdcdd6c88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_poincare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 12:42:06 np0005596060 podman[88732]: 2026-01-26 17:42:06.872633074 +0000 UTC m=+0.131386798 container start 8833c867a6434958d5e77746b857236ba83f8420b2f62462aa8837cdcdd6c88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_poincare, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 12:42:06 np0005596060 podman[88732]: 2026-01-26 17:42:06.875623568 +0000 UTC m=+0.134377332 container attach 8833c867a6434958d5e77746b857236ba83f8420b2f62462aa8837cdcdd6c88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 12:42:06 np0005596060 xenodochial_poincare[88756]: 167 167
Jan 26 12:42:06 np0005596060 podman[88732]: 2026-01-26 17:42:06.878410617 +0000 UTC m=+0.137164351 container died 8833c867a6434958d5e77746b857236ba83f8420b2f62462aa8837cdcdd6c88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_poincare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:06 np0005596060 systemd[1]: libpod-8833c867a6434958d5e77746b857236ba83f8420b2f62462aa8837cdcdd6c88a.scope: Deactivated successfully.
Jan 26 12:42:06 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fb5f7055b602e595dc92e7ce03430e11f90ac37d462d7d3907045cce1660d7ea-merged.mount: Deactivated successfully.
Jan 26 12:42:06 np0005596060 python3[88751]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:06 np0005596060 podman[88732]: 2026-01-26 17:42:06.917255467 +0000 UTC m=+0.176009191 container remove 8833c867a6434958d5e77746b857236ba83f8420b2f62462aa8837cdcdd6c88a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_poincare, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:06 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/814326835' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 26 12:42:06 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:06 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/814326835' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 26 12:42:06 np0005596060 systemd[1]: libpod-conmon-8833c867a6434958d5e77746b857236ba83f8420b2f62462aa8837cdcdd6c88a.scope: Deactivated successfully.
Jan 26 12:42:06 np0005596060 podman[88773]: 2026-01-26 17:42:06.985160865 +0000 UTC m=+0.047828113 container create ae06371f0bc14900bd45c08984bf0bdc8d3539cfe7c6f1a869867505fcd63d41 (image=quay.io/ceph/ceph:v18, name=distracted_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:07 np0005596060 systemd[1]: Started libpod-conmon-ae06371f0bc14900bd45c08984bf0bdc8d3539cfe7c6f1a869867505fcd63d41.scope.
Jan 26 12:42:07 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2cc583e90856e45e9b7398b7ab75ab50fb0e417225b55f4295efd09b67692bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2cc583e90856e45e9b7398b7ab75ab50fb0e417225b55f4295efd09b67692bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:07 np0005596060 podman[88773]: 2026-01-26 17:42:06.967250943 +0000 UTC m=+0.029918191 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:07 np0005596060 podman[88773]: 2026-01-26 17:42:07.065424629 +0000 UTC m=+0.128091877 container init ae06371f0bc14900bd45c08984bf0bdc8d3539cfe7c6f1a869867505fcd63d41 (image=quay.io/ceph/ceph:v18, name=distracted_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 12:42:07 np0005596060 podman[88773]: 2026-01-26 17:42:07.071394856 +0000 UTC m=+0.134062084 container start ae06371f0bc14900bd45c08984bf0bdc8d3539cfe7c6f1a869867505fcd63d41 (image=quay.io/ceph/ceph:v18, name=distracted_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 12:42:07 np0005596060 podman[88773]: 2026-01-26 17:42:07.075232901 +0000 UTC m=+0.137900149 container attach ae06371f0bc14900bd45c08984bf0bdc8d3539cfe7c6f1a869867505fcd63d41 (image=quay.io/ceph/ceph:v18, name=distracted_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 12:42:07 np0005596060 podman[88797]: 2026-01-26 17:42:07.096683242 +0000 UTC m=+0.058917148 container create 783d72e75036b1291debaa7636389dd12aa187b7358f221ee3de22cb48e3f2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 12:42:07 np0005596060 systemd[1]: Started libpod-conmon-783d72e75036b1291debaa7636389dd12aa187b7358f221ee3de22cb48e3f2e3.scope.
Jan 26 12:42:07 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45bd63a8e5e9bf14b00dfaa0e3ec253bba0861235717258a21fb770fc0282d2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45bd63a8e5e9bf14b00dfaa0e3ec253bba0861235717258a21fb770fc0282d2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45bd63a8e5e9bf14b00dfaa0e3ec253bba0861235717258a21fb770fc0282d2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45bd63a8e5e9bf14b00dfaa0e3ec253bba0861235717258a21fb770fc0282d2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:07 np0005596060 podman[88797]: 2026-01-26 17:42:07.075360815 +0000 UTC m=+0.037594741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:07 np0005596060 podman[88797]: 2026-01-26 17:42:07.182334568 +0000 UTC m=+0.144568564 container init 783d72e75036b1291debaa7636389dd12aa187b7358f221ee3de22cb48e3f2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendel, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 26 12:42:07 np0005596060 podman[88797]: 2026-01-26 17:42:07.188285225 +0000 UTC m=+0.150519121 container start 783d72e75036b1291debaa7636389dd12aa187b7358f221ee3de22cb48e3f2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 12:42:07 np0005596060 podman[88797]: 2026-01-26 17:42:07.192466699 +0000 UTC m=+0.154700645 container attach 783d72e75036b1291debaa7636389dd12aa187b7358f221ee3de22cb48e3f2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendel, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:07 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 26 12:42:07 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 26 12:42:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Jan 26 12:42:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2848215916' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 26 12:42:07 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/2848215916' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 26 12:42:08 np0005596060 distracted_mendel[88816]: {
Jan 26 12:42:08 np0005596060 distracted_mendel[88816]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:42:08 np0005596060 distracted_mendel[88816]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:42:08 np0005596060 distracted_mendel[88816]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:42:08 np0005596060 distracted_mendel[88816]:        "osd_id": 1,
Jan 26 12:42:08 np0005596060 distracted_mendel[88816]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:42:08 np0005596060 distracted_mendel[88816]:        "type": "bluestore"
Jan 26 12:42:08 np0005596060 distracted_mendel[88816]:    }
Jan 26 12:42:08 np0005596060 distracted_mendel[88816]: }
Jan 26 12:42:08 np0005596060 systemd[1]: libpod-783d72e75036b1291debaa7636389dd12aa187b7358f221ee3de22cb48e3f2e3.scope: Deactivated successfully.
Jan 26 12:42:08 np0005596060 podman[88857]: 2026-01-26 17:42:08.106697445 +0000 UTC m=+0.026856885 container died 783d72e75036b1291debaa7636389dd12aa187b7358f221ee3de22cb48e3f2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:42:08 np0005596060 systemd[1]: var-lib-containers-storage-overlay-45bd63a8e5e9bf14b00dfaa0e3ec253bba0861235717258a21fb770fc0282d2c-merged.mount: Deactivated successfully.
Jan 26 12:42:08 np0005596060 podman[88857]: 2026-01-26 17:42:08.156836634 +0000 UTC m=+0.076996044 container remove 783d72e75036b1291debaa7636389dd12aa187b7358f221ee3de22cb48e3f2e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 26 12:42:08 np0005596060 systemd[1]: libpod-conmon-783d72e75036b1291debaa7636389dd12aa187b7358f221ee3de22cb48e3f2e3.scope: Deactivated successfully.
Jan 26 12:42:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:42:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:42:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 26 12:42:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2848215916' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 26 12:42:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Jan 26 12:42:08 np0005596060 distracted_agnesi[88795]: enabled application 'rbd' on pool 'images'
Jan 26 12:42:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Jan 26 12:42:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:08 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 12:42:08 np0005596060 systemd[1]: libpod-ae06371f0bc14900bd45c08984bf0bdc8d3539cfe7c6f1a869867505fcd63d41.scope: Deactivated successfully.
Jan 26 12:42:08 np0005596060 podman[88773]: 2026-01-26 17:42:08.681613353 +0000 UTC m=+1.744280581 container died ae06371f0bc14900bd45c08984bf0bdc8d3539cfe7c6f1a869867505fcd63d41 (image=quay.io/ceph/ceph:v18, name=distracted_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v96: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:42:09 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a2cc583e90856e45e9b7398b7ab75ab50fb0e417225b55f4295efd09b67692bc-merged.mount: Deactivated successfully.
Jan 26 12:42:09 np0005596060 podman[88773]: 2026-01-26 17:42:09.107286564 +0000 UTC m=+2.169953792 container remove ae06371f0bc14900bd45c08984bf0bdc8d3539cfe7c6f1a869867505fcd63d41 (image=quay.io/ceph/ceph:v18, name=distracted_agnesi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 12:42:09 np0005596060 systemd[1]: libpod-conmon-ae06371f0bc14900bd45c08984bf0bdc8d3539cfe7c6f1a869867505fcd63d41.scope: Deactivated successfully.
Jan 26 12:42:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:09 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/2848215916' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 26 12:42:09 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 26 12:42:09 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 26 12:42:09 np0005596060 python3[88910]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:09 np0005596060 podman[88911]: 2026-01-26 17:42:09.501028635 +0000 UTC m=+0.059341117 container create db767406afa3890be7718813a4c8d7f032c595009133f819e73bc6b02734c2bd (image=quay.io/ceph/ceph:v18, name=admiring_meitner, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:09 np0005596060 systemd[1]: Started libpod-conmon-db767406afa3890be7718813a4c8d7f032c595009133f819e73bc6b02734c2bd.scope.
Jan 26 12:42:09 np0005596060 podman[88911]: 2026-01-26 17:42:09.481649606 +0000 UTC m=+0.039961908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:09 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01978be9e12308795f5599a2fe8717e1bff6e46934ecca4626d787790782bb29/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01978be9e12308795f5599a2fe8717e1bff6e46934ecca4626d787790782bb29/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:09 np0005596060 podman[88911]: 2026-01-26 17:42:09.593651874 +0000 UTC m=+0.151964156 container init db767406afa3890be7718813a4c8d7f032c595009133f819e73bc6b02734c2bd (image=quay.io/ceph/ceph:v18, name=admiring_meitner, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 26 12:42:09 np0005596060 podman[88911]: 2026-01-26 17:42:09.60439873 +0000 UTC m=+0.162711052 container start db767406afa3890be7718813a4c8d7f032c595009133f819e73bc6b02734c2bd (image=quay.io/ceph/ceph:v18, name=admiring_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:09 np0005596060 podman[88911]: 2026-01-26 17:42:09.608997364 +0000 UTC m=+0.167309646 container attach db767406afa3890be7718813a4c8d7f032c595009133f819e73bc6b02734c2bd (image=quay.io/ceph/ceph:v18, name=admiring_meitner, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 12:42:09 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/694454554' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/694454554' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:42:10 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 26 12:42:10 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/694454554' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Jan 26 12:42:10 np0005596060 admiring_meitner[88926]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:10 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 12:42:10 np0005596060 systemd[1]: libpod-db767406afa3890be7718813a4c8d7f032c595009133f819e73bc6b02734c2bd.scope: Deactivated successfully.
Jan 26 12:42:10 np0005596060 podman[88911]: 2026-01-26 17:42:10.741148755 +0000 UTC m=+1.299461067 container died db767406afa3890be7718813a4c8d7f032c595009133f819e73bc6b02734c2bd (image=quay.io/ceph/ceph:v18, name=admiring_meitner, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 12:42:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v98: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:42:10 np0005596060 systemd[1]: var-lib-containers-storage-overlay-01978be9e12308795f5599a2fe8717e1bff6e46934ecca4626d787790782bb29-merged.mount: Deactivated successfully.
Jan 26 12:42:10 np0005596060 podman[88911]: 2026-01-26 17:42:10.790980667 +0000 UTC m=+1.349292949 container remove db767406afa3890be7718813a4c8d7f032c595009133f819e73bc6b02734c2bd (image=quay.io/ceph/ceph:v18, name=admiring_meitner, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 12:42:10 np0005596060 systemd[1]: libpod-conmon-db767406afa3890be7718813a4c8d7f032c595009133f819e73bc6b02734c2bd.scope: Deactivated successfully.
Jan 26 12:42:10 np0005596060 ceph-mgr[74563]: [progress INFO root] Writing back 10 completed events
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 26 12:42:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:11 np0005596060 python3[88989]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:11 np0005596060 podman[88990]: 2026-01-26 17:42:11.143463668 +0000 UTC m=+0.045600678 container create 6907cb11e9d6d4c1c158685fc583969d0a1034de65fdf15ac2502537922dbe96 (image=quay.io/ceph/ceph:v18, name=amazing_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 26 12:42:11 np0005596060 systemd[1]: Started libpod-conmon-6907cb11e9d6d4c1c158685fc583969d0a1034de65fdf15ac2502537922dbe96.scope.
Jan 26 12:42:11 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1186b48a81d0414c8f99e0bcfcbc3ec5502539ca4838882c01a86efa9ac3564/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1186b48a81d0414c8f99e0bcfcbc3ec5502539ca4838882c01a86efa9ac3564/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:11 np0005596060 podman[88990]: 2026-01-26 17:42:11.123123146 +0000 UTC m=+0.025260176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:11 np0005596060 podman[88990]: 2026-01-26 17:42:11.229622888 +0000 UTC m=+0.131759908 container init 6907cb11e9d6d4c1c158685fc583969d0a1034de65fdf15ac2502537922dbe96 (image=quay.io/ceph/ceph:v18, name=amazing_galois, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:42:11 np0005596060 podman[88990]: 2026-01-26 17:42:11.23819576 +0000 UTC m=+0.140332770 container start 6907cb11e9d6d4c1c158685fc583969d0a1034de65fdf15ac2502537922dbe96 (image=quay.io/ceph/ceph:v18, name=amazing_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:42:11 np0005596060 podman[88990]: 2026-01-26 17:42:11.241792289 +0000 UTC m=+0.143929299 container attach 6907cb11e9d6d4c1c158685fc583969d0a1034de65fdf15ac2502537922dbe96 (image=quay.io/ceph/ceph:v18, name=amazing_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 12:42:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 26 12:42:11 np0005596060 ceph-mon[74267]: Deploying daemon osd.2 on compute-2
Jan 26 12:42:11 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/694454554' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 26 12:42:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Jan 26 12:42:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3523141357' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 26 12:42:12 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3523141357' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 26 12:42:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 26 12:42:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3523141357' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 26 12:42:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Jan 26 12:42:12 np0005596060 amazing_galois[89006]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 26 12:42:12 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Jan 26 12:42:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:12 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 12:42:12 np0005596060 systemd[1]: libpod-6907cb11e9d6d4c1c158685fc583969d0a1034de65fdf15ac2502537922dbe96.scope: Deactivated successfully.
Jan 26 12:42:12 np0005596060 podman[88990]: 2026-01-26 17:42:12.75754931 +0000 UTC m=+1.659686330 container died 6907cb11e9d6d4c1c158685fc583969d0a1034de65fdf15ac2502537922dbe96 (image=quay.io/ceph/ceph:v18, name=amazing_galois, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 12:42:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v100: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:42:12 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a1186b48a81d0414c8f99e0bcfcbc3ec5502539ca4838882c01a86efa9ac3564-merged.mount: Deactivated successfully.
Jan 26 12:42:12 np0005596060 podman[88990]: 2026-01-26 17:42:12.799131428 +0000 UTC m=+1.701268438 container remove 6907cb11e9d6d4c1c158685fc583969d0a1034de65fdf15ac2502537922dbe96 (image=quay.io/ceph/ceph:v18, name=amazing_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 12:42:12 np0005596060 systemd[1]: libpod-conmon-6907cb11e9d6d4c1c158685fc583969d0a1034de65fdf15ac2502537922dbe96.scope: Deactivated successfully.
Jan 26 12:42:13 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 26 12:42:13 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 26 12:42:13 np0005596060 python3[89118]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:42:13 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3523141357' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 26 12:42:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:42:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:42:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:42:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:42:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:42:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:42:14 np0005596060 python3[89189]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769449333.5933588-37339-177047101174851/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:42:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:42:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:42:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v101: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:42:14 np0005596060 python3[89291]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:42:14 np0005596060 ceph-mon[74267]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 26 12:42:14 np0005596060 ceph-mon[74267]: Cluster is now healthy
Jan 26 12:42:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:42:15 np0005596060 python3[89366]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769449334.5058103-37353-169412135707793/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=285fedca87deffe521a7efd8ddad0e15edac4057 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:42:15 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.cchxrf started
Jan 26 12:42:15 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-2.cchxrf 192.168.122.102:0/3455788602; not ready for session (expect reconnect)
Jan 26 12:42:15 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Jan 26 12:42:15 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Jan 26 12:42:15 np0005596060 python3[89416]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:15 np0005596060 podman[89417]: 2026-01-26 17:42:15.66791345 +0000 UTC m=+0.052951290 container create 6f51fb53572c2b5a47896481232266f070fd0db46694f918e274934501bccf3c (image=quay.io/ceph/ceph:v18, name=infallible_poitras, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 12:42:15 np0005596060 systemd[1]: Started libpod-conmon-6f51fb53572c2b5a47896481232266f070fd0db46694f918e274934501bccf3c.scope.
Jan 26 12:42:15 np0005596060 podman[89417]: 2026-01-26 17:42:15.64649738 +0000 UTC m=+0.031535270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:15 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fcca87dbe00008c23d50cf0bc7c8cbdf169dedb65faba7fb01522a010fe21df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fcca87dbe00008c23d50cf0bc7c8cbdf169dedb65faba7fb01522a010fe21df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fcca87dbe00008c23d50cf0bc7c8cbdf169dedb65faba7fb01522a010fe21df/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:15 np0005596060 podman[89417]: 2026-01-26 17:42:15.769216472 +0000 UTC m=+0.154254312 container init 6f51fb53572c2b5a47896481232266f070fd0db46694f918e274934501bccf3c (image=quay.io/ceph/ceph:v18, name=infallible_poitras, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:15 np0005596060 podman[89417]: 2026-01-26 17:42:15.774890963 +0000 UTC m=+0.159928803 container start 6f51fb53572c2b5a47896481232266f070fd0db46694f918e274934501bccf3c (image=quay.io/ceph/ceph:v18, name=infallible_poitras, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 12:42:15 np0005596060 podman[89417]: 2026-01-26 17:42:15.778074431 +0000 UTC m=+0.163112291 container attach 6f51fb53572c2b5a47896481232266f070fd0db46694f918e274934501bccf3c (image=quay.io/ceph/ceph:v18, name=infallible_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:15 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.mbryrf(active, since 2m), standbys: compute-2.cchxrf
Jan 26 12:42:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.cchxrf", "id": "compute-2.cchxrf"} v 0) v1
Jan 26 12:42:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mgr metadata", "who": "compute-2.cchxrf", "id": "compute-2.cchxrf"}]: dispatch
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2401748601' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2401748601' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 26 12:42:16 np0005596060 infallible_poitras[89434]: 
Jan 26 12:42:16 np0005596060 infallible_poitras[89434]: [global]
Jan 26 12:42:16 np0005596060 infallible_poitras[89434]: #011fsid = d4cd1917-5876-51b6-bc64-65a16199754d
Jan 26 12:42:16 np0005596060 infallible_poitras[89434]: #011mon_host = 192.168.122.100
Jan 26 12:42:16 np0005596060 systemd[1]: libpod-6f51fb53572c2b5a47896481232266f070fd0db46694f918e274934501bccf3c.scope: Deactivated successfully.
Jan 26 12:42:16 np0005596060 podman[89417]: 2026-01-26 17:42:16.447449645 +0000 UTC m=+0.832487515 container died 6f51fb53572c2b5a47896481232266f070fd0db46694f918e274934501bccf3c (image=quay.io/ceph/ceph:v18, name=infallible_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 12:42:16 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4fcca87dbe00008c23d50cf0bc7c8cbdf169dedb65faba7fb01522a010fe21df-merged.mount: Deactivated successfully.
Jan 26 12:42:16 np0005596060 podman[89417]: 2026-01-26 17:42:16.493499803 +0000 UTC m=+0.878537643 container remove 6f51fb53572c2b5a47896481232266f070fd0db46694f918e274934501bccf3c (image=quay.io/ceph/ceph:v18, name=infallible_poitras, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:42:16 np0005596060 systemd[1]: libpod-conmon-6f51fb53572c2b5a47896481232266f070fd0db46694f918e274934501bccf3c.scope: Deactivated successfully.
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v102: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:42:16 np0005596060 python3[89497]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:16 np0005596060 podman[89521]: 2026-01-26 17:42:16.881077502 +0000 UTC m=+0.065324845 container create a76e6e996e6dcbb56c696e4f466421e7904ece2a2246dcedb99dcaed1e68841f (image=quay.io/ceph/ceph:v18, name=loving_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 12:42:16 np0005596060 systemd[1]: Started libpod-conmon-a76e6e996e6dcbb56c696e4f466421e7904ece2a2246dcedb99dcaed1e68841f.scope.
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Jan 26 12:42:16 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Jan 26 12:42:16 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e34 create-or-move crush item name 'osd.2' initial_weight 0.0068 at location {host=compute-2,root=default}
Jan 26 12:42:16 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:16 np0005596060 podman[89521]: 2026-01-26 17:42:16.860294969 +0000 UTC m=+0.044542312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36de83a518f9653dc807e88a9cba419dd29b5ecb72222cb3928c8d7ced39ffb6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36de83a518f9653dc807e88a9cba419dd29b5ecb72222cb3928c8d7ced39ffb6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36de83a518f9653dc807e88a9cba419dd29b5ecb72222cb3928c8d7ced39ffb6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: from='osd.2 [v2:192.168.122.102:6800/815499186,v1:192.168.122.102:6801/815499186]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/2401748601' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/2401748601' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:16 np0005596060 podman[89521]: 2026-01-26 17:42:16.97446989 +0000 UTC m=+0.158717223 container init a76e6e996e6dcbb56c696e4f466421e7904ece2a2246dcedb99dcaed1e68841f (image=quay.io/ceph/ceph:v18, name=loving_lamport, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:16 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.qpyzhk started
Jan 26 12:42:16 np0005596060 podman[89521]: 2026-01-26 17:42:16.982262303 +0000 UTC m=+0.166509616 container start a76e6e996e6dcbb56c696e4f466421e7904ece2a2246dcedb99dcaed1e68841f (image=quay.io/ceph/ceph:v18, name=loving_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 12:42:16 np0005596060 podman[89521]: 2026-01-26 17:42:16.985627626 +0000 UTC m=+0.169874949 container attach a76e6e996e6dcbb56c696e4f466421e7904ece2a2246dcedb99dcaed1e68841f (image=quay.io/ceph/ceph:v18, name=loving_lamport, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3160412710' entity='client.admin' 
Jan 26 12:42:17 np0005596060 loving_lamport[89563]: set ssl_option
Jan 26 12:42:17 np0005596060 systemd[1]: libpod-a76e6e996e6dcbb56c696e4f466421e7904ece2a2246dcedb99dcaed1e68841f.scope: Deactivated successfully.
Jan 26 12:42:17 np0005596060 podman[89521]: 2026-01-26 17:42:17.66794407 +0000 UTC m=+0.852191383 container died a76e6e996e6dcbb56c696e4f466421e7904ece2a2246dcedb99dcaed1e68841f (image=quay.io/ceph/ceph:v18, name=loving_lamport, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:17 np0005596060 systemd[1]: var-lib-containers-storage-overlay-36de83a518f9653dc807e88a9cba419dd29b5ecb72222cb3928c8d7ced39ffb6-merged.mount: Deactivated successfully.
Jan 26 12:42:17 np0005596060 podman[89521]: 2026-01-26 17:42:17.709569428 +0000 UTC m=+0.893816741 container remove a76e6e996e6dcbb56c696e4f466421e7904ece2a2246dcedb99dcaed1e68841f (image=quay.io/ceph/ceph:v18, name=loving_lamport, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 12:42:17 np0005596060 systemd[1]: libpod-conmon-a76e6e996e6dcbb56c696e4f466421e7904ece2a2246dcedb99dcaed1e68841f.scope: Deactivated successfully.
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 26 12:42:17 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e35 e35: 3 total, 2 up, 3 in
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 2 up, 3 in
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:17 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 12:42:17 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/815499186; not ready for session (expect reconnect)
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:17 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: from='osd.2 [v2:192.168.122.102:6800/815499186,v1:192.168.122.102:6801/815499186]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/3160412710' entity='client.admin' 
Jan 26 12:42:17 np0005596060 ceph-mon[74267]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 26 12:42:18 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.mbryrf(active, since 2m), standbys: compute-2.cchxrf, compute-1.qpyzhk
Jan 26 12:42:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.qpyzhk", "id": "compute-1.qpyzhk"} v 0) v1
Jan 26 12:42:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qpyzhk", "id": "compute-1.qpyzhk"}]: dispatch
Jan 26 12:42:18 np0005596060 python3[89743]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:18 np0005596060 podman[89756]: 2026-01-26 17:42:18.133116646 +0000 UTC m=+0.024530677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:18 np0005596060 podman[89756]: 2026-01-26 17:42:18.242811118 +0000 UTC m=+0.134225169 container create 4a8598550461d45bafa5f5dc149c83c8121b816de963de0deebd343a3e94af87 (image=quay.io/ceph/ceph:v18, name=loving_poincare, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 12:42:18 np0005596060 systemd[1]: Started libpod-conmon-4a8598550461d45bafa5f5dc149c83c8121b816de963de0deebd343a3e94af87.scope.
Jan 26 12:42:18 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b208251ecf38b4eac57f44344fce59ce855bb50750034b7ef15da7532c35e6e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b208251ecf38b4eac57f44344fce59ce855bb50750034b7ef15da7532c35e6e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b208251ecf38b4eac57f44344fce59ce855bb50750034b7ef15da7532c35e6e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:18 np0005596060 podman[89756]: 2026-01-26 17:42:18.525551516 +0000 UTC m=+0.416965617 container init 4a8598550461d45bafa5f5dc149c83c8121b816de963de0deebd343a3e94af87 (image=quay.io/ceph/ceph:v18, name=loving_poincare, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:18 np0005596060 podman[89756]: 2026-01-26 17:42:18.535802999 +0000 UTC m=+0.427217040 container start 4a8598550461d45bafa5f5dc149c83c8121b816de963de0deebd343a3e94af87 (image=quay.io/ceph/ceph:v18, name=loving_poincare, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:42:18 np0005596060 podman[89756]: 2026-01-26 17:42:18.680862664 +0000 UTC m=+0.572276725 container attach 4a8598550461d45bafa5f5dc149c83c8121b816de963de0deebd343a3e94af87 (image=quay.io/ceph/ceph:v18, name=loving_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 12:42:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:42:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v105: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:42:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.13( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.645995140s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 78.222229004s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.056303024s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.632553101s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.15( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.646049500s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 78.222320557s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.13( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.645995140s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.222229004s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.056303024s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632553101s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.c( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.650124550s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 78.226615906s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.d( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.650163651s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 78.226654053s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.10( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.650485992s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 78.226974487s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.d( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.650163651s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226654053s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.c( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.650124550s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226615906s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.10( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.650485992s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226974487s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.a( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.650152206s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 78.226768494s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.a( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.650152206s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226768494s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=21/22 n=0 ec=16/16 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.485351562s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 77.062057495s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[3.0( empty local-lis/les=21/22 n=0 ec=16/16 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.485351562s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.062057495s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.15( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.646049500s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.222320557s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=35 pruub=11.016263962s) [] r=-1 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 active pruub 78.593063354s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=35 pruub=11.016263962s) [] r=-1 lpr=35 pi=[19,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.593063354s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.055410385s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.632316589s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.055406570s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.632308960s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.055410385s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632316589s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.484498024s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 77.061470032s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.055406570s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632308960s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.484498024s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.061470032s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.1b( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.649960518s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 active pruub 78.226982117s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[2.1b( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=35 pruub=10.649960518s) [] r=-1 lpr=35 pi=[27,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226982117s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.055432320s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.632530212s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.055293083s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.632400513s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.055432320s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632530212s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.055293083s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632400513s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.468673706s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 77.045837402s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.055253029s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.632423401s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.055181503s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.632400513s@ mbc={}] start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=35 pruub=9.468673706s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.045837402s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.055253029s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632423401s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 35 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=12.055181503s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632400513s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:18 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/815499186; not ready for session (expect reconnect)
Jan 26 12:42:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:18 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 12:42:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:42:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:19 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14283 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:42:19 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 26 12:42:19 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 26 12:42:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 26 12:42:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:19 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 26 12:42:19 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 26 12:42:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 26 12:42:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:19 np0005596060 loving_poincare[89772]: Scheduled rgw.rgw update...
Jan 26 12:42:19 np0005596060 loving_poincare[89772]: Scheduled ingress.rgw.default update...
Jan 26 12:42:19 np0005596060 systemd[1]: libpod-4a8598550461d45bafa5f5dc149c83c8121b816de963de0deebd343a3e94af87.scope: Deactivated successfully.
Jan 26 12:42:19 np0005596060 podman[89756]: 2026-01-26 17:42:19.17674906 +0000 UTC m=+1.068163111 container died 4a8598550461d45bafa5f5dc149c83c8121b816de963de0deebd343a3e94af87 (image=quay.io/ceph/ceph:v18, name=loving_poincare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:19 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2b208251ecf38b4eac57f44344fce59ce855bb50750034b7ef15da7532c35e6e-merged.mount: Deactivated successfully.
Jan 26 12:42:19 np0005596060 podman[89756]: 2026-01-26 17:42:19.220580033 +0000 UTC m=+1.111994044 container remove 4a8598550461d45bafa5f5dc149c83c8121b816de963de0deebd343a3e94af87 (image=quay.io/ceph/ceph:v18, name=loving_poincare, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 12:42:19 np0005596060 systemd[1]: libpod-conmon-4a8598550461d45bafa5f5dc149c83c8121b816de963de0deebd343a3e94af87.scope: Deactivated successfully.
Jan 26 12:42:19 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/815499186; not ready for session (expect reconnect)
Jan 26 12:42:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:19 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 12:42:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:20 np0005596060 ceph-mon[74267]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 26 12:42:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:20 np0005596060 ceph-mon[74267]: Saving service ingress.rgw.default spec with placement count:2
Jan 26 12:42:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:42:20 np0005596060 python3[89882]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:42:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v106: 100 pgs: 100 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 26 12:42:20 np0005596060 python3[89953]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769449340.2056432-37394-196897938311644/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:42:20 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/815499186; not ready for session (expect reconnect)
Jan 26 12:42:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:20 np0005596060 ceph-mgr[74563]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 26 12:42:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 26 12:42:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.15( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.425952911s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.222320557s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.10( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.430527687s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226974487s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.836084366s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632553101s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.15( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.425840378s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.222320557s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.836033821s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632553101s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.10( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.430449486s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226974487s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/815499186,v1:192.168.122.102:6801/815499186] boot
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.c( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.429782867s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226615906s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.d( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.429779053s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226654053s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.c( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.429749489s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226615906s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.13( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.425330162s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.222229004s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.d( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.429730415s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226654053s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.13( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.425271988s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.222229004s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=21/22 n=0 ec=16/16 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.264883041s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.062057495s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.a( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.429564476s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226768494s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[3.0( empty local-lis/les=21/22 n=0 ec=16/16 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.264846802s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.062057495s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.a( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.429514885s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226768494s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=36 pruub=8.795751572s) [2] r=-1 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.593063354s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[5.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=36 pruub=8.795720100s) [2] r=-1 lpr=36 pi=[19,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.593063354s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.834776878s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632308960s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.834839821s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632423401s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.834733009s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632308960s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.834815025s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632423401s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.834654808s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632316589s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.834613800s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632316589s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.263514519s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.061470032s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.834423065s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632400513s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.1b( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.428995132s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226982117s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[3.8( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.263484955s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.061470032s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.834402084s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632400513s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[2.1b( empty local-lis/les=27/29 n=0 ec=21/14 lis/c=27/27 les/c/f=29/29/0 sis=36 pruub=8.428970337s) [2] r=-1 lpr=36 pi=[27,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 78.226982117s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.834464073s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632530212s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.247708321s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.045837402s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.834437370s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632530212s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.834212303s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632400513s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[3.1b( empty local-lis/les=21/22 n=0 ec=21/16 lis/c=21/21 les/c/f=22/22/0 sis=36 pruub=7.247687817s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.045837402s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 36 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/17 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=9.834194183s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.632400513s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:42:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 26 12:42:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 26 12:42:21 np0005596060 ceph-mon[74267]: OSD bench result of 8119.114292 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 26 12:42:21 np0005596060 python3[90003]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:21 np0005596060 podman[90004]: 2026-01-26 17:42:21.583465811 +0000 UTC m=+0.069691693 container create 6c74f7c347d5621722ea3407150ad905880a692c72853e6bf44e081b368088ce (image=quay.io/ceph/ceph:v18, name=amazing_darwin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:42:21 np0005596060 systemd[1]: Started libpod-conmon-6c74f7c347d5621722ea3407150ad905880a692c72853e6bf44e081b368088ce.scope.
Jan 26 12:42:21 np0005596060 podman[90004]: 2026-01-26 17:42:21.557594012 +0000 UTC m=+0.043819924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:21 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a080e358e22c9824a7c9f8165e6c9d48f4444ebba5e637853f7d788ec9d562ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a080e358e22c9824a7c9f8165e6c9d48f4444ebba5e637853f7d788ec9d562ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a080e358e22c9824a7c9f8165e6c9d48f4444ebba5e637853f7d788ec9d562ff/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:21 np0005596060 podman[90004]: 2026-01-26 17:42:21.809038626 +0000 UTC m=+0.295264518 container init 6c74f7c347d5621722ea3407150ad905880a692c72853e6bf44e081b368088ce (image=quay.io/ceph/ceph:v18, name=amazing_darwin, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:21 np0005596060 podman[90004]: 2026-01-26 17:42:21.822353075 +0000 UTC m=+0.308578937 container start 6c74f7c347d5621722ea3407150ad905880a692c72853e6bf44e081b368088ce (image=quay.io/ceph/ceph:v18, name=amazing_darwin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:42:21 np0005596060 podman[90004]: 2026-01-26 17:42:21.996573571 +0000 UTC m=+0.482799523 container attach 6c74f7c347d5621722ea3407150ad905880a692c72853e6bf44e081b368088ce (image=quay.io/ceph/ceph:v18, name=amazing_darwin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: osd.2 [v2:192.168.122.102:6800/815499186,v1:192.168.122.102:6801/815499186] boot
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14289 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 26 12:42:22 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0[74263]: 2026-01-26T17:42:22.557+0000 7f2c6c857640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e2 new map
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-26T17:42:22.558304+0000#012modified#0112026-01-26T17:42:22.558341+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:42:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 26 12:42:22 np0005596060 systemd[1]: libpod-6c74f7c347d5621722ea3407150ad905880a692c72853e6bf44e081b368088ce.scope: Deactivated successfully.
Jan 26 12:42:22 np0005596060 podman[90004]: 2026-01-26 17:42:22.715832278 +0000 UTC m=+1.202058150 container died 6c74f7c347d5621722ea3407150ad905880a692c72853e6bf44e081b368088ce (image=quay.io/ceph/ceph:v18, name=amazing_darwin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 26 12:42:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a080e358e22c9824a7c9f8165e6c9d48f4444ebba5e637853f7d788ec9d562ff-merged.mount: Deactivated successfully.
Jan 26 12:42:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v110: 100 pgs: 36 peering, 64 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:42:22 np0005596060 podman[90004]: 2026-01-26 17:42:22.775908772 +0000 UTC m=+1.262134674 container remove 6c74f7c347d5621722ea3407150ad905880a692c72853e6bf44e081b368088ce (image=quay.io/ceph/ceph:v18, name=amazing_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 12:42:22 np0005596060 systemd[1]: libpod-conmon-6c74f7c347d5621722ea3407150ad905880a692c72853e6bf44e081b368088ce.scope: Deactivated successfully.
Jan 26 12:42:23 np0005596060 python3[90181]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:23 np0005596060 podman[90234]: 2026-01-26 17:42:23.183252609 +0000 UTC m=+0.057012830 container create decc61910100fcf18bcfe78e0e21d0f30fd4daf0b5c902b9757d43bd7da354f0 (image=quay.io/ceph/ceph:v18, name=festive_bassi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:23 np0005596060 systemd[1]: Started libpod-conmon-decc61910100fcf18bcfe78e0e21d0f30fd4daf0b5c902b9757d43bd7da354f0.scope.
Jan 26 12:42:23 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:23 np0005596060 podman[90234]: 2026-01-26 17:42:23.153541145 +0000 UTC m=+0.027301446 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a5d26075bbacebe407872b5317d9fd4a16c89e1fe04d799997ce36b32c44ce6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a5d26075bbacebe407872b5317d9fd4a16c89e1fe04d799997ce36b32c44ce6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a5d26075bbacebe407872b5317d9fd4a16c89e1fe04d799997ce36b32c44ce6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:23 np0005596060 podman[90234]: 2026-01-26 17:42:23.268824084 +0000 UTC m=+0.142584315 container init decc61910100fcf18bcfe78e0e21d0f30fd4daf0b5c902b9757d43bd7da354f0 (image=quay.io/ceph/ceph:v18, name=festive_bassi, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:23 np0005596060 podman[90234]: 2026-01-26 17:42:23.276828382 +0000 UTC m=+0.150588593 container start decc61910100fcf18bcfe78e0e21d0f30fd4daf0b5c902b9757d43bd7da354f0 (image=quay.io/ceph/ceph:v18, name=festive_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:42:23 np0005596060 podman[90234]: 2026-01-26 17:42:23.283798924 +0000 UTC m=+0.157559135 container attach decc61910100fcf18bcfe78e0e21d0f30fd4daf0b5c902b9757d43bd7da354f0 (image=quay.io/ceph/ceph:v18, name=festive_bassi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 12:42:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 26 12:42:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 26 12:42:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 26 12:42:23 np0005596060 ceph-mon[74267]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 26 12:42:23 np0005596060 ceph-mon[74267]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 26 12:42:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 26 12:42:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 26 12:42:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:42:23 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14295 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 12:42:23 np0005596060 ceph-mgr[74563]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 12:42:23 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 12:42:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 26 12:42:23 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:42:23 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:42:24 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:42:24 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:42:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:24 np0005596060 festive_bassi[90280]: Scheduled mds.cephfs update...
Jan 26 12:42:24 np0005596060 systemd[1]: libpod-decc61910100fcf18bcfe78e0e21d0f30fd4daf0b5c902b9757d43bd7da354f0.scope: Deactivated successfully.
Jan 26 12:42:24 np0005596060 podman[90234]: 2026-01-26 17:42:24.153682603 +0000 UTC m=+1.027442844 container died decc61910100fcf18bcfe78e0e21d0f30fd4daf0b5c902b9757d43bd7da354f0 (image=quay.io/ceph/ceph:v18, name=festive_bassi, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 26 12:42:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay-9a5d26075bbacebe407872b5317d9fd4a16c89e1fe04d799997ce36b32c44ce6-merged.mount: Deactivated successfully.
Jan 26 12:42:24 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:42:24 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:42:24 np0005596060 podman[90234]: 2026-01-26 17:42:24.212342783 +0000 UTC m=+1.086103004 container remove decc61910100fcf18bcfe78e0e21d0f30fd4daf0b5c902b9757d43bd7da354f0 (image=quay.io/ceph/ceph:v18, name=festive_bassi, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 12:42:24 np0005596060 systemd[1]: libpod-conmon-decc61910100fcf18bcfe78e0e21d0f30fd4daf0b5c902b9757d43bd7da354f0.scope: Deactivated successfully.
Jan 26 12:42:24 np0005596060 ceph-mon[74267]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 12:42:24 np0005596060 ceph-mon[74267]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 26 12:42:24 np0005596060 ceph-mon[74267]: Unable to set osd_memory_target on compute-2 to 134209126: error parsing value: Value '134209126' is below minimum 939524096
Jan 26 12:42:24 np0005596060 ceph-mon[74267]: Updating compute-0:/etc/ceph/ceph.conf
Jan 26 12:42:24 np0005596060 ceph-mon[74267]: Updating compute-1:/etc/ceph/ceph.conf
Jan 26 12:42:24 np0005596060 ceph-mon[74267]: Updating compute-2:/etc/ceph/ceph.conf
Jan 26 12:42:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v111: 100 pgs: 36 peering, 64 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: Updating compute-0:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: Updating compute-2:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: Updating compute-1:/var/lib/ceph/d4cd1917-5876-51b6-bc64-65a16199754d/config/ceph.conf
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:25 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 26 12:42:25 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 26 12:42:25 np0005596060 python3[91060]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:25 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev fd43163f-967c-4290-a8c2-40dc88668567 does not exist
Jan 26 12:42:25 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5df196f3-af39-480b-826d-fbd3ff4bd3b8 does not exist
Jan 26 12:42:25 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d242ab5c-f427-4ed3-bb16-4f04b6ba465d does not exist
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:42:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:42:25 np0005596060 python3[91181]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769449345.1758137-37446-11652213821265/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=395d1c083c7c30077cae22673689037cb8c534c6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:42:26 np0005596060 podman[91299]: 2026-01-26 17:42:26.308343186 +0000 UTC m=+0.046347027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:26 np0005596060 python3[91338]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v112: 100 pgs: 36 peering, 64 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:42:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:42:26 np0005596060 podman[91299]: 2026-01-26 17:42:26.982959808 +0000 UTC m=+0.720963609 container create 097131ea991218fdefd6739976d12fe0fae724a7c9d0984256a0d3f25034a183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wozniak, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:42:27 np0005596060 systemd[1]: Started libpod-conmon-097131ea991218fdefd6739976d12fe0fae724a7c9d0984256a0d3f25034a183.scope.
Jan 26 12:42:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:27 np0005596060 podman[91299]: 2026-01-26 17:42:27.172378249 +0000 UTC m=+0.910382020 container init 097131ea991218fdefd6739976d12fe0fae724a7c9d0984256a0d3f25034a183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:27 np0005596060 podman[91339]: 2026-01-26 17:42:27.175414704 +0000 UTC m=+0.588987157 container create 3e5e900409f8191ded89aef8491bf25af83439fb26e6691c15ad498b9a15b1cb (image=quay.io/ceph/ceph:v18, name=strange_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:27 np0005596060 podman[91299]: 2026-01-26 17:42:27.18455824 +0000 UTC m=+0.922562021 container start 097131ea991218fdefd6739976d12fe0fae724a7c9d0984256a0d3f25034a183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:42:27 np0005596060 podman[91299]: 2026-01-26 17:42:27.188544129 +0000 UTC m=+0.926547930 container attach 097131ea991218fdefd6739976d12fe0fae724a7c9d0984256a0d3f25034a183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wozniak, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 26 12:42:27 np0005596060 happy_wozniak[91353]: 167 167
Jan 26 12:42:27 np0005596060 systemd[1]: libpod-097131ea991218fdefd6739976d12fe0fae724a7c9d0984256a0d3f25034a183.scope: Deactivated successfully.
Jan 26 12:42:27 np0005596060 podman[91299]: 2026-01-26 17:42:27.193891181 +0000 UTC m=+0.931895002 container died 097131ea991218fdefd6739976d12fe0fae724a7c9d0984256a0d3f25034a183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:27 np0005596060 systemd[1]: Started libpod-conmon-3e5e900409f8191ded89aef8491bf25af83439fb26e6691c15ad498b9a15b1cb.scope.
Jan 26 12:42:27 np0005596060 systemd[1]: var-lib-containers-storage-overlay-bda94083c264c2c5ddce6229e4c5e5450c19b97943de252d88ad25cc9225a9db-merged.mount: Deactivated successfully.
Jan 26 12:42:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:27 np0005596060 podman[91339]: 2026-01-26 17:42:27.151016711 +0000 UTC m=+0.564589244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:27 np0005596060 podman[91299]: 2026-01-26 17:42:27.242865732 +0000 UTC m=+0.980869483 container remove 097131ea991218fdefd6739976d12fe0fae724a7c9d0984256a0d3f25034a183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wozniak, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0d94cc5983dbd90437064ee95523d518e9e8689cb20aa35fddafd81e7b6feb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0d94cc5983dbd90437064ee95523d518e9e8689cb20aa35fddafd81e7b6feb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:27 np0005596060 systemd[1]: libpod-conmon-097131ea991218fdefd6739976d12fe0fae724a7c9d0984256a0d3f25034a183.scope: Deactivated successfully.
Jan 26 12:42:27 np0005596060 podman[91339]: 2026-01-26 17:42:27.255644217 +0000 UTC m=+0.669216680 container init 3e5e900409f8191ded89aef8491bf25af83439fb26e6691c15ad498b9a15b1cb (image=quay.io/ceph/ceph:v18, name=strange_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 12:42:27 np0005596060 podman[91339]: 2026-01-26 17:42:27.263925272 +0000 UTC m=+0.677497725 container start 3e5e900409f8191ded89aef8491bf25af83439fb26e6691c15ad498b9a15b1cb (image=quay.io/ceph/ceph:v18, name=strange_elion, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:27 np0005596060 podman[91339]: 2026-01-26 17:42:27.267518591 +0000 UTC m=+0.681091064 container attach 3e5e900409f8191ded89aef8491bf25af83439fb26e6691c15ad498b9a15b1cb (image=quay.io/ceph/ceph:v18, name=strange_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 12:42:27 np0005596060 podman[91386]: 2026-01-26 17:42:27.419775004 +0000 UTC m=+0.043712152 container create 978b1362757129f9d55521afe7ae08ecfbe8724fa57d5c736fd85db160ff0e89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:27 np0005596060 systemd[1]: Started libpod-conmon-978b1362757129f9d55521afe7ae08ecfbe8724fa57d5c736fd85db160ff0e89.scope.
Jan 26 12:42:27 np0005596060 podman[91386]: 2026-01-26 17:42:27.39737782 +0000 UTC m=+0.021314978 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c032bda0c6dae8b4125e21e3b27ef5c39c0ae9e695b293bcf156b4065d6fa9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c032bda0c6dae8b4125e21e3b27ef5c39c0ae9e695b293bcf156b4065d6fa9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c032bda0c6dae8b4125e21e3b27ef5c39c0ae9e695b293bcf156b4065d6fa9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c032bda0c6dae8b4125e21e3b27ef5c39c0ae9e695b293bcf156b4065d6fa9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c032bda0c6dae8b4125e21e3b27ef5c39c0ae9e695b293bcf156b4065d6fa9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:27 np0005596060 podman[91386]: 2026-01-26 17:42:27.80522631 +0000 UTC m=+0.429163508 container init 978b1362757129f9d55521afe7ae08ecfbe8724fa57d5c736fd85db160ff0e89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:27 np0005596060 podman[91386]: 2026-01-26 17:42:27.823119172 +0000 UTC m=+0.447056370 container start 978b1362757129f9d55521afe7ae08ecfbe8724fa57d5c736fd85db160ff0e89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 12:42:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Jan 26 12:42:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2224340308' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 26 12:42:28 np0005596060 podman[91386]: 2026-01-26 17:42:28.262635145 +0000 UTC m=+0.886572343 container attach 978b1362757129f9d55521afe7ae08ecfbe8724fa57d5c736fd85db160ff0e89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_euler, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 12:42:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2224340308' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 26 12:42:28 np0005596060 systemd[1]: libpod-3e5e900409f8191ded89aef8491bf25af83439fb26e6691c15ad498b9a15b1cb.scope: Deactivated successfully.
Jan 26 12:42:28 np0005596060 podman[91339]: 2026-01-26 17:42:28.39879657 +0000 UTC m=+1.812369093 container died 3e5e900409f8191ded89aef8491bf25af83439fb26e6691c15ad498b9a15b1cb (image=quay.io/ceph/ceph:v18, name=strange_elion, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 12:42:28 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/2224340308' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 26 12:42:28 np0005596060 strange_euler[91402]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:42:28 np0005596060 strange_euler[91402]: --> relative data size: 1.0
Jan 26 12:42:28 np0005596060 strange_euler[91402]: --> All data devices are unavailable
Jan 26 12:42:28 np0005596060 systemd[1]: libpod-978b1362757129f9d55521afe7ae08ecfbe8724fa57d5c736fd85db160ff0e89.scope: Deactivated successfully.
Jan 26 12:42:28 np0005596060 podman[91386]: 2026-01-26 17:42:28.612994034 +0000 UTC m=+1.236931192 container died 978b1362757129f9d55521afe7ae08ecfbe8724fa57d5c736fd85db160ff0e89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_euler, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 12:42:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v113: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:42:29 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0c032bda0c6dae8b4125e21e3b27ef5c39c0ae9e695b293bcf156b4065d6fa9a-merged.mount: Deactivated successfully.
Jan 26 12:42:29 np0005596060 podman[91386]: 2026-01-26 17:42:29.334939897 +0000 UTC m=+1.958877055 container remove 978b1362757129f9d55521afe7ae08ecfbe8724fa57d5c736fd85db160ff0e89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:42:29 np0005596060 systemd[1]: var-lib-containers-storage-overlay-be0d94cc5983dbd90437064ee95523d518e9e8689cb20aa35fddafd81e7b6feb-merged.mount: Deactivated successfully.
Jan 26 12:42:29 np0005596060 podman[91339]: 2026-01-26 17:42:29.462384547 +0000 UTC m=+2.875956990 container remove 3e5e900409f8191ded89aef8491bf25af83439fb26e6691c15ad498b9a15b1cb (image=quay.io/ceph/ceph:v18, name=strange_elion, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 12:42:29 np0005596060 systemd[1]: libpod-conmon-3e5e900409f8191ded89aef8491bf25af83439fb26e6691c15ad498b9a15b1cb.scope: Deactivated successfully.
Jan 26 12:42:29 np0005596060 systemd[1]: libpod-conmon-978b1362757129f9d55521afe7ae08ecfbe8724fa57d5c736fd85db160ff0e89.scope: Deactivated successfully.
Jan 26 12:42:29 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/2224340308' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 26 12:42:30 np0005596060 podman[91603]: 2026-01-26 17:42:29.999844859 +0000 UTC m=+0.025156411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:30 np0005596060 podman[91603]: 2026-01-26 17:42:30.146025932 +0000 UTC m=+0.171337384 container create f2eaa21e6b71830e07318dfd7be6dd6103f4a2e91cf94d7e1c2b1c7aa13595f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:30 np0005596060 systemd[1]: Started libpod-conmon-f2eaa21e6b71830e07318dfd7be6dd6103f4a2e91cf94d7e1c2b1c7aa13595f7.scope.
Jan 26 12:42:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:42:30 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:30 np0005596060 podman[91603]: 2026-01-26 17:42:30.231756051 +0000 UTC m=+0.257067533 container init f2eaa21e6b71830e07318dfd7be6dd6103f4a2e91cf94d7e1c2b1c7aa13595f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:30 np0005596060 podman[91603]: 2026-01-26 17:42:30.243797399 +0000 UTC m=+0.269108891 container start f2eaa21e6b71830e07318dfd7be6dd6103f4a2e91cf94d7e1c2b1c7aa13595f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 12:42:30 np0005596060 sharp_maxwell[91645]: 167 167
Jan 26 12:42:30 np0005596060 systemd[1]: libpod-f2eaa21e6b71830e07318dfd7be6dd6103f4a2e91cf94d7e1c2b1c7aa13595f7.scope: Deactivated successfully.
Jan 26 12:42:30 np0005596060 python3[91642]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:30 np0005596060 podman[91603]: 2026-01-26 17:42:30.386748422 +0000 UTC m=+0.412059904 container attach f2eaa21e6b71830e07318dfd7be6dd6103f4a2e91cf94d7e1c2b1c7aa13595f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:30 np0005596060 podman[91603]: 2026-01-26 17:42:30.389810797 +0000 UTC m=+0.415122289 container died f2eaa21e6b71830e07318dfd7be6dd6103f4a2e91cf94d7e1c2b1c7aa13595f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 26 12:42:30 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 26 12:42:30 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 26 12:42:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v114: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:42:30 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2b16741f7b3e11ef42e55929340c04fbe065cc550985b7c877665e2c683a6699-merged.mount: Deactivated successfully.
Jan 26 12:42:31 np0005596060 podman[91603]: 2026-01-26 17:42:31.147964395 +0000 UTC m=+1.173275877 container remove f2eaa21e6b71830e07318dfd7be6dd6103f4a2e91cf94d7e1c2b1c7aa13595f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 12:42:31 np0005596060 systemd[1]: libpod-conmon-f2eaa21e6b71830e07318dfd7be6dd6103f4a2e91cf94d7e1c2b1c7aa13595f7.scope: Deactivated successfully.
Jan 26 12:42:31 np0005596060 podman[91663]: 2026-01-26 17:42:31.214022318 +0000 UTC m=+0.873291635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:31 np0005596060 podman[91663]: 2026-01-26 17:42:31.313710901 +0000 UTC m=+0.972980198 container create f531d9b9e3cc842ac5aaf4fc1b3662fcc7762ca6f738fbdcf553bbcdb2607d18 (image=quay.io/ceph/ceph:v18, name=mystifying_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:31 np0005596060 systemd[1]: Started libpod-conmon-f531d9b9e3cc842ac5aaf4fc1b3662fcc7762ca6f738fbdcf553bbcdb2607d18.scope.
Jan 26 12:42:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a47555d1cf52b5b747304af546ab61fe189650792e62c77f9ce04a97c9219c7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a47555d1cf52b5b747304af546ab61fe189650792e62c77f9ce04a97c9219c7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:31 np0005596060 podman[91687]: 2026-01-26 17:42:31.449720523 +0000 UTC m=+0.149942407 container create d6ba60060ab2a02a3561ef91d7cc01b6b321760837123f32dfe4c79d1c82500b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 12:42:31 np0005596060 podman[91663]: 2026-01-26 17:42:31.46377085 +0000 UTC m=+1.123040177 container init f531d9b9e3cc842ac5aaf4fc1b3662fcc7762ca6f738fbdcf553bbcdb2607d18 (image=quay.io/ceph/ceph:v18, name=mystifying_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:31 np0005596060 podman[91663]: 2026-01-26 17:42:31.469927752 +0000 UTC m=+1.129197049 container start f531d9b9e3cc842ac5aaf4fc1b3662fcc7762ca6f738fbdcf553bbcdb2607d18 (image=quay.io/ceph/ceph:v18, name=mystifying_jennings, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:31 np0005596060 podman[91687]: 2026-01-26 17:42:31.377420566 +0000 UTC m=+0.077642480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:31 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 26 12:42:31 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 26 12:42:31 np0005596060 systemd[1]: Started libpod-conmon-d6ba60060ab2a02a3561ef91d7cc01b6b321760837123f32dfe4c79d1c82500b.scope.
Jan 26 12:42:31 np0005596060 podman[91663]: 2026-01-26 17:42:31.522948203 +0000 UTC m=+1.182217560 container attach f531d9b9e3cc842ac5aaf4fc1b3662fcc7762ca6f738fbdcf553bbcdb2607d18 (image=quay.io/ceph/ceph:v18, name=mystifying_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82d96689582aa18955dd33151d8e001d81282f8579d346101ce4e681d221b60/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82d96689582aa18955dd33151d8e001d81282f8579d346101ce4e681d221b60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82d96689582aa18955dd33151d8e001d81282f8579d346101ce4e681d221b60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d82d96689582aa18955dd33151d8e001d81282f8579d346101ce4e681d221b60/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:31 np0005596060 podman[91687]: 2026-01-26 17:42:31.57137692 +0000 UTC m=+0.271598864 container init d6ba60060ab2a02a3561ef91d7cc01b6b321760837123f32dfe4c79d1c82500b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:31 np0005596060 podman[91687]: 2026-01-26 17:42:31.577365368 +0000 UTC m=+0.277587282 container start d6ba60060ab2a02a3561ef91d7cc01b6b321760837123f32dfe4c79d1c82500b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 12:42:31 np0005596060 podman[91687]: 2026-01-26 17:42:31.600894049 +0000 UTC m=+0.301115943 container attach d6ba60060ab2a02a3561ef91d7cc01b6b321760837123f32dfe4c79d1c82500b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 26 12:42:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/672764225' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 12:42:32 np0005596060 mystifying_jennings[91703]: 
Jan 26 12:42:32 np0005596060 mystifying_jennings[91703]: {"fsid":"d4cd1917-5876-51b6-bc64-65a16199754d","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":36,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":38,"num_osds":3,"num_up_osds":3,"osd_up_since":1769449341,"num_in_osds":3,"osd_in_since":1769449323,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":100}],"num_pgs":100,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84045824,"bytes_avail":22451949568,"bytes_total":22535995392},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2026-01-26T17:42:22.773224+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.qpyzhk":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.cchxrf":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Jan 26 12:42:32 np0005596060 systemd[1]: libpod-f531d9b9e3cc842ac5aaf4fc1b3662fcc7762ca6f738fbdcf553bbcdb2607d18.scope: Deactivated successfully.
Jan 26 12:42:32 np0005596060 podman[91736]: 2026-01-26 17:42:32.195590347 +0000 UTC m=+0.036593095 container died f531d9b9e3cc842ac5aaf4fc1b3662fcc7762ca6f738fbdcf553bbcdb2607d18 (image=quay.io/ceph/ceph:v18, name=mystifying_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 12:42:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2a47555d1cf52b5b747304af546ab61fe189650792e62c77f9ce04a97c9219c7-merged.mount: Deactivated successfully.
Jan 26 12:42:32 np0005596060 festive_edison[91710]: {
Jan 26 12:42:32 np0005596060 festive_edison[91710]:    "1": [
Jan 26 12:42:32 np0005596060 festive_edison[91710]:        {
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            "devices": [
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "/dev/loop3"
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            ],
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            "lv_name": "ceph_lv0",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            "lv_size": "7511998464",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            "name": "ceph_lv0",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            "tags": {
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "ceph.cluster_name": "ceph",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "ceph.crush_device_class": "",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "ceph.encrypted": "0",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "ceph.osd_id": "1",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "ceph.type": "block",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:                "ceph.vdo": "0"
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            },
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            "type": "block",
Jan 26 12:42:32 np0005596060 festive_edison[91710]:            "vg_name": "ceph_vg0"
Jan 26 12:42:32 np0005596060 festive_edison[91710]:        }
Jan 26 12:42:32 np0005596060 festive_edison[91710]:    ]
Jan 26 12:42:32 np0005596060 festive_edison[91710]: }
Jan 26 12:42:32 np0005596060 podman[91736]: 2026-01-26 17:42:32.341397171 +0000 UTC m=+0.182399829 container remove f531d9b9e3cc842ac5aaf4fc1b3662fcc7762ca6f738fbdcf553bbcdb2607d18 (image=quay.io/ceph/ceph:v18, name=mystifying_jennings, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 12:42:32 np0005596060 systemd[1]: libpod-conmon-f531d9b9e3cc842ac5aaf4fc1b3662fcc7762ca6f738fbdcf553bbcdb2607d18.scope: Deactivated successfully.
Jan 26 12:42:32 np0005596060 systemd[1]: libpod-d6ba60060ab2a02a3561ef91d7cc01b6b321760837123f32dfe4c79d1c82500b.scope: Deactivated successfully.
Jan 26 12:42:32 np0005596060 podman[91687]: 2026-01-26 17:42:32.380827275 +0000 UTC m=+1.081049149 container died d6ba60060ab2a02a3561ef91d7cc01b6b321760837123f32dfe4c79d1c82500b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d82d96689582aa18955dd33151d8e001d81282f8579d346101ce4e681d221b60-merged.mount: Deactivated successfully.
Jan 26 12:42:32 np0005596060 python3[91792]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:32 np0005596060 podman[91687]: 2026-01-26 17:42:32.683624519 +0000 UTC m=+1.383846423 container remove d6ba60060ab2a02a3561ef91d7cc01b6b321760837123f32dfe4c79d1c82500b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 12:42:32 np0005596060 systemd[1]: libpod-conmon-d6ba60060ab2a02a3561ef91d7cc01b6b321760837123f32dfe4c79d1c82500b.scope: Deactivated successfully.
Jan 26 12:42:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v115: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:42:32 np0005596060 podman[91794]: 2026-01-26 17:42:32.739431498 +0000 UTC m=+0.042018629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:32 np0005596060 podman[91794]: 2026-01-26 17:42:32.880609817 +0000 UTC m=+0.183196928 container create 82f42c8e9ca8e9ce11c59c138c7fb4d014c06ab21078e2efab230ce2c314bdc6 (image=quay.io/ceph/ceph:v18, name=gallant_greider, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 12:42:32 np0005596060 systemd[1]: Started libpod-conmon-82f42c8e9ca8e9ce11c59c138c7fb4d014c06ab21078e2efab230ce2c314bdc6.scope.
Jan 26 12:42:32 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d853444592ad9069773677c792394f6e70607c7c03a779587538ca85dd0c2d5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d853444592ad9069773677c792394f6e70607c7c03a779587538ca85dd0c2d5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:32 np0005596060 podman[91794]: 2026-01-26 17:42:32.980511797 +0000 UTC m=+0.283098948 container init 82f42c8e9ca8e9ce11c59c138c7fb4d014c06ab21078e2efab230ce2c314bdc6 (image=quay.io/ceph/ceph:v18, name=gallant_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 26 12:42:32 np0005596060 podman[91794]: 2026-01-26 17:42:32.992962374 +0000 UTC m=+0.295549485 container start 82f42c8e9ca8e9ce11c59c138c7fb4d014c06ab21078e2efab230ce2c314bdc6 (image=quay.io/ceph/ceph:v18, name=gallant_greider, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:32 np0005596060 podman[91794]: 2026-01-26 17:42:32.996372269 +0000 UTC m=+0.298959380 container attach 82f42c8e9ca8e9ce11c59c138c7fb4d014c06ab21078e2efab230ce2c314bdc6 (image=quay.io/ceph/ceph:v18, name=gallant_greider, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:33 np0005596060 podman[91972]: 2026-01-26 17:42:33.485345284 +0000 UTC m=+0.072236577 container create 4c368bbe9e19e04b453e5c688d28e34174a307a2293a4aab9f9d30a1b1df41f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:33 np0005596060 systemd[1]: Started libpod-conmon-4c368bbe9e19e04b453e5c688d28e34174a307a2293a4aab9f9d30a1b1df41f1.scope.
Jan 26 12:42:33 np0005596060 podman[91972]: 2026-01-26 17:42:33.444688479 +0000 UTC m=+0.031579762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:33 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:33 np0005596060 podman[91972]: 2026-01-26 17:42:33.575357547 +0000 UTC m=+0.162248870 container init 4c368bbe9e19e04b453e5c688d28e34174a307a2293a4aab9f9d30a1b1df41f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 12:42:33 np0005596060 podman[91972]: 2026-01-26 17:42:33.58073423 +0000 UTC m=+0.167625483 container start 4c368bbe9e19e04b453e5c688d28e34174a307a2293a4aab9f9d30a1b1df41f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:33 np0005596060 hardcore_mclean[91988]: 167 167
Jan 26 12:42:33 np0005596060 systemd[1]: libpod-4c368bbe9e19e04b453e5c688d28e34174a307a2293a4aab9f9d30a1b1df41f1.scope: Deactivated successfully.
Jan 26 12:42:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 12:42:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3074442636' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 12:42:33 np0005596060 gallant_greider[91858]: 
Jan 26 12:42:33 np0005596060 gallant_greider[91858]: {"epoch":3,"fsid":"d4cd1917-5876-51b6-bc64-65a16199754d","modified":"2026-01-26T17:41:50.087961Z","created":"2026-01-26T17:38:49.582225Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 26 12:42:33 np0005596060 gallant_greider[91858]: dumped monmap epoch 3
Jan 26 12:42:33 np0005596060 systemd[1]: libpod-82f42c8e9ca8e9ce11c59c138c7fb4d014c06ab21078e2efab230ce2c314bdc6.scope: Deactivated successfully.
Jan 26 12:42:33 np0005596060 podman[91972]: 2026-01-26 17:42:33.718319541 +0000 UTC m=+0.305210874 container attach 4c368bbe9e19e04b453e5c688d28e34174a307a2293a4aab9f9d30a1b1df41f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:42:33 np0005596060 podman[91794]: 2026-01-26 17:42:33.719667294 +0000 UTC m=+1.022254415 container died 82f42c8e9ca8e9ce11c59c138c7fb4d014c06ab21078e2efab230ce2c314bdc6 (image=quay.io/ceph/ceph:v18, name=gallant_greider, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 12:42:33 np0005596060 podman[91972]: 2026-01-26 17:42:33.719745866 +0000 UTC m=+0.306637119 container died 4c368bbe9e19e04b453e5c688d28e34174a307a2293a4aab9f9d30a1b1df41f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:34 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 26 12:42:34 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 26 12:42:34 np0005596060 systemd[1]: var-lib-containers-storage-overlay-30fdfb469c0fe062659e67c266bd51ac23a93a8c45e1f61c184700b9af1e5305-merged.mount: Deactivated successfully.
Jan 26 12:42:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v116: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:42:35 np0005596060 podman[91972]: 2026-01-26 17:42:35.150055224 +0000 UTC m=+1.736946477 container remove 4c368bbe9e19e04b453e5c688d28e34174a307a2293a4aab9f9d30a1b1df41f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mclean, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:35 np0005596060 systemd[1]: libpod-conmon-4c368bbe9e19e04b453e5c688d28e34174a307a2293a4aab9f9d30a1b1df41f1.scope: Deactivated successfully.
Jan 26 12:42:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:42:35 np0005596060 systemd[1]: var-lib-containers-storage-overlay-6d853444592ad9069773677c792394f6e70607c7c03a779587538ca85dd0c2d5-merged.mount: Deactivated successfully.
Jan 26 12:42:35 np0005596060 podman[91794]: 2026-01-26 17:42:35.681968412 +0000 UTC m=+2.984555543 container remove 82f42c8e9ca8e9ce11c59c138c7fb4d014c06ab21078e2efab230ce2c314bdc6 (image=quay.io/ceph/ceph:v18, name=gallant_greider, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:35 np0005596060 systemd[1]: libpod-conmon-82f42c8e9ca8e9ce11c59c138c7fb4d014c06ab21078e2efab230ce2c314bdc6.scope: Deactivated successfully.
Jan 26 12:42:35 np0005596060 podman[92027]: 2026-01-26 17:42:35.727195115 +0000 UTC m=+0.457756880 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:36 np0005596060 podman[92027]: 2026-01-26 17:42:36.357053013 +0000 UTC m=+1.087614728 container create f68bf94e30ffc16a80f6a40bf3f5c4b09200cd2d1ef565f21dcfde5d86744925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:36 np0005596060 python3[92066]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:36 np0005596060 systemd[1]: Started libpod-conmon-f68bf94e30ffc16a80f6a40bf3f5c4b09200cd2d1ef565f21dcfde5d86744925.scope.
Jan 26 12:42:36 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 26 12:42:36 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 26 12:42:36 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1454a8db341691f43221311a964042d2989fbd181ced81a783d79cfb831b8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1454a8db341691f43221311a964042d2989fbd181ced81a783d79cfb831b8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1454a8db341691f43221311a964042d2989fbd181ced81a783d79cfb831b8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1454a8db341691f43221311a964042d2989fbd181ced81a783d79cfb831b8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:36 np0005596060 podman[92068]: 2026-01-26 17:42:36.426609649 +0000 UTC m=+0.027147105 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:36 np0005596060 podman[92068]: 2026-01-26 17:42:36.609799875 +0000 UTC m=+0.210337351 container create bb997713cad2cdc933657a5e04659083d72be0114636ceb730bf52f9e367559c (image=quay.io/ceph/ceph:v18, name=keen_shockley, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:36 np0005596060 podman[92027]: 2026-01-26 17:42:36.752841283 +0000 UTC m=+1.483403048 container init f68bf94e30ffc16a80f6a40bf3f5c4b09200cd2d1ef565f21dcfde5d86744925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 12:42:36 np0005596060 podman[92027]: 2026-01-26 17:42:36.761425376 +0000 UTC m=+1.491987091 container start f68bf94e30ffc16a80f6a40bf3f5c4b09200cd2d1ef565f21dcfde5d86744925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nightingale, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v117: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:42:36 np0005596060 podman[92027]: 2026-01-26 17:42:36.907925401 +0000 UTC m=+1.638487206 container attach f68bf94e30ffc16a80f6a40bf3f5c4b09200cd2d1ef565f21dcfde5d86744925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:42:37 np0005596060 systemd[1]: Started libpod-conmon-bb997713cad2cdc933657a5e04659083d72be0114636ceb730bf52f9e367559c.scope.
Jan 26 12:42:37 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d7378c7d3544526d3d4729ca1fe8ce96713604deb571cbdfb428e3f9794436/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30d7378c7d3544526d3d4729ca1fe8ce96713604deb571cbdfb428e3f9794436/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:37 np0005596060 podman[92068]: 2026-01-26 17:42:37.240915852 +0000 UTC m=+0.841453348 container init bb997713cad2cdc933657a5e04659083d72be0114636ceb730bf52f9e367559c (image=quay.io/ceph/ceph:v18, name=keen_shockley, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:37 np0005596060 podman[92068]: 2026-01-26 17:42:37.252447729 +0000 UTC m=+0.852985155 container start bb997713cad2cdc933657a5e04659083d72be0114636ceb730bf52f9e367559c (image=quay.io/ceph/ceph:v18, name=keen_shockley, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:37 np0005596060 podman[92068]: 2026-01-26 17:42:37.258826377 +0000 UTC m=+0.859363913 container attach bb997713cad2cdc933657a5e04659083d72be0114636ceb730bf52f9e367559c (image=quay.io/ceph/ceph:v18, name=keen_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:37 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 26 12:42:37 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 26 12:42:37 np0005596060 hardcore_nightingale[92083]: {
Jan 26 12:42:37 np0005596060 hardcore_nightingale[92083]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:42:37 np0005596060 hardcore_nightingale[92083]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:42:37 np0005596060 hardcore_nightingale[92083]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:42:37 np0005596060 hardcore_nightingale[92083]:        "osd_id": 1,
Jan 26 12:42:37 np0005596060 hardcore_nightingale[92083]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:42:37 np0005596060 hardcore_nightingale[92083]:        "type": "bluestore"
Jan 26 12:42:37 np0005596060 hardcore_nightingale[92083]:    }
Jan 26 12:42:37 np0005596060 hardcore_nightingale[92083]: }
Jan 26 12:42:37 np0005596060 systemd[1]: libpod-f68bf94e30ffc16a80f6a40bf3f5c4b09200cd2d1ef565f21dcfde5d86744925.scope: Deactivated successfully.
Jan 26 12:42:37 np0005596060 podman[92027]: 2026-01-26 17:42:37.681954996 +0000 UTC m=+2.412516711 container died f68bf94e30ffc16a80f6a40bf3f5c4b09200cd2d1ef565f21dcfde5d86744925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:37 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ff1454a8db341691f43221311a964042d2989fbd181ced81a783d79cfb831b8e-merged.mount: Deactivated successfully.
Jan 26 12:42:37 np0005596060 podman[92027]: 2026-01-26 17:42:37.744337594 +0000 UTC m=+2.474899309 container remove f68bf94e30ffc16a80f6a40bf3f5c4b09200cd2d1ef565f21dcfde5d86744925 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 26 12:42:37 np0005596060 systemd[1]: libpod-conmon-f68bf94e30ffc16a80f6a40bf3f5c4b09200cd2d1ef565f21dcfde5d86744925.scope: Deactivated successfully.
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:37 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev 73e9ab06-5e0f-418e-b52d-652c1c5662d3 (Updating rgw.rgw deployment (+3 -> 3))
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.vncnzm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.vncnzm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.vncnzm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:42:37 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.vncnzm on compute-2
Jan 26 12:42:37 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.vncnzm on compute-2
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Jan 26 12:42:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1867336082' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 26 12:42:37 np0005596060 keen_shockley[92090]: [client.openstack]
Jan 26 12:42:37 np0005596060 keen_shockley[92090]: #011key = AQCMpndpAAAAABAAeoO/qaM5nFbVydjhvQD2lg==
Jan 26 12:42:37 np0005596060 keen_shockley[92090]: #011caps mgr = "allow *"
Jan 26 12:42:37 np0005596060 keen_shockley[92090]: #011caps mon = "profile rbd"
Jan 26 12:42:37 np0005596060 keen_shockley[92090]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 26 12:42:37 np0005596060 systemd[1]: libpod-bb997713cad2cdc933657a5e04659083d72be0114636ceb730bf52f9e367559c.scope: Deactivated successfully.
Jan 26 12:42:37 np0005596060 podman[92068]: 2026-01-26 17:42:37.959136723 +0000 UTC m=+1.559674159 container died bb997713cad2cdc933657a5e04659083d72be0114636ceb730bf52f9e367559c (image=quay.io/ceph/ceph:v18, name=keen_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:37 np0005596060 systemd[1]: var-lib-containers-storage-overlay-30d7378c7d3544526d3d4729ca1fe8ce96713604deb571cbdfb428e3f9794436-merged.mount: Deactivated successfully.
Jan 26 12:42:38 np0005596060 podman[92068]: 2026-01-26 17:42:38.007305588 +0000 UTC m=+1.607843024 container remove bb997713cad2cdc933657a5e04659083d72be0114636ceb730bf52f9e367559c (image=quay.io/ceph/ceph:v18, name=keen_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 12:42:38 np0005596060 systemd[1]: libpod-conmon-bb997713cad2cdc933657a5e04659083d72be0114636ceb730bf52f9e367559c.scope: Deactivated successfully.
Jan 26 12:42:38 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 26 12:42:38 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 26 12:42:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v118: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:42:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.vncnzm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 12:42:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.vncnzm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 12:42:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:38 np0005596060 ceph-mon[74267]: Deploying daemon rgw.rgw.compute-2.vncnzm on compute-2
Jan 26 12:42:38 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1867336082' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 26 12:42:39 np0005596060 ansible-async_wrapper.py[92301]: Invoked with j250548617547 30 /home/zuul/.ansible/tmp/ansible-tmp-1769449359.1519365-37520-135612512960886/AnsiballZ_command.py _
Jan 26 12:42:39 np0005596060 ansible-async_wrapper.py[92304]: Starting module and watcher
Jan 26 12:42:39 np0005596060 ansible-async_wrapper.py[92304]: Start watching 92305 (30)
Jan 26 12:42:39 np0005596060 ansible-async_wrapper.py[92305]: Start module (92305)
Jan 26 12:42:39 np0005596060 ansible-async_wrapper.py[92301]: Return async_wrapper task started.
Jan 26 12:42:39 np0005596060 python3[92306]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:39 np0005596060 podman[92307]: 2026-01-26 17:42:39.998890934 +0000 UTC m=+0.047903800 container create 58a3e366238d3374246e799842c9f35c1c1fa22d58484d7190eb1a962d137ec8 (image=quay.io/ceph/ceph:v18, name=unruffled_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:40 np0005596060 systemd[1]: Started libpod-conmon-58a3e366238d3374246e799842c9f35c1c1fa22d58484d7190eb1a962d137ec8.scope.
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:42:40 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:40 np0005596060 podman[92307]: 2026-01-26 17:42:39.981759399 +0000 UTC m=+0.030772285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167d937e13f7fa0e02fbde5fce694a79c79bcf25b8b1e5b5cfe7ccaeacd61d47/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/167d937e13f7fa0e02fbde5fce694a79c79bcf25b8b1e5b5cfe7ccaeacd61d47/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 26 12:42:40 np0005596060 podman[92307]: 2026-01-26 17:42:40.099529541 +0000 UTC m=+0.148542417 container init 58a3e366238d3374246e799842c9f35c1c1fa22d58484d7190eb1a962d137ec8 (image=quay.io/ceph/ceph:v18, name=unruffled_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:40 np0005596060 podman[92307]: 2026-01-26 17:42:40.106041822 +0000 UTC m=+0.155054688 container start 58a3e366238d3374246e799842c9f35c1c1fa22d58484d7190eb1a962d137ec8 (image=quay.io/ceph/ceph:v18, name=unruffled_lumiere, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:40 np0005596060 podman[92307]: 2026-01-26 17:42:40.109423216 +0000 UTC m=+0.158436082 container attach 58a3e366238d3374246e799842c9f35c1c1fa22d58484d7190eb1a962d137ec8 (image=quay.io/ceph/ceph:v18, name=unruffled_lumiere, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dudysi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dudysi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dudysi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:42:40 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.dudysi on compute-1
Jan 26 12:42:40 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.dudysi on compute-1
Jan 26 12:42:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:42:40 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14328 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 12:42:40 np0005596060 unruffled_lumiere[92323]: 
Jan 26 12:42:40 np0005596060 unruffled_lumiere[92323]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 26 12:42:40 np0005596060 systemd[1]: libpod-58a3e366238d3374246e799842c9f35c1c1fa22d58484d7190eb1a962d137ec8.scope: Deactivated successfully.
Jan 26 12:42:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v119: 100 pgs: 100 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:42:40 np0005596060 podman[92350]: 2026-01-26 17:42:40.79235226 +0000 UTC m=+0.025422721 container died 58a3e366238d3374246e799842c9f35c1c1fa22d58484d7190eb1a962d137ec8 (image=quay.io/ceph/ceph:v18, name=unruffled_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 12:42:40 np0005596060 systemd[1]: var-lib-containers-storage-overlay-167d937e13f7fa0e02fbde5fce694a79c79bcf25b8b1e5b5cfe7ccaeacd61d47-merged.mount: Deactivated successfully.
Jan 26 12:42:40 np0005596060 podman[92350]: 2026-01-26 17:42:40.836108116 +0000 UTC m=+0.069178547 container remove 58a3e366238d3374246e799842c9f35c1c1fa22d58484d7190eb1a962d137ec8 (image=quay.io/ceph/ceph:v18, name=unruffled_lumiere, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 12:42:40 np0005596060 systemd[1]: libpod-conmon-58a3e366238d3374246e799842c9f35c1c1fa22d58484d7190eb1a962d137ec8.scope: Deactivated successfully.
Jan 26 12:42:40 np0005596060 ansible-async_wrapper.py[92305]: Module complete (92305)
Jan 26 12:42:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dudysi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 12:42:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.dudysi", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 12:42:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:41 np0005596060 ceph-mon[74267]: Deploying daemon rgw.rgw.compute-1.dudysi on compute-1
Jan 26 12:42:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 26 12:42:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 26 12:42:41 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 26 12:42:41 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 39 pg[8.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Jan 26 12:42:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 26 12:42:41 np0005596060 python3[92414]: ansible-ansible.legacy.async_status Invoked with jid=j250548617547.92301 mode=status _async_dir=/root/.ansible_async
Jan 26 12:42:41 np0005596060 python3[92463]: ansible-ansible.legacy.async_status Invoked with jid=j250548617547.92301 mode=cleanup _async_dir=/root/.ansible_async
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.102:0/882037292' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 26 12:42:42 np0005596060 python3[92489]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:42 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 40 pg[8.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:42 np0005596060 podman[92490]: 2026-01-26 17:42:42.227832108 +0000 UTC m=+0.059099808 container create e079bad09cdc73b3c34136268ac257523c7c39996366950f0c8607c0ebae6ca8 (image=quay.io/ceph/ceph:v18, name=charming_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:42:42 np0005596060 systemd[1]: Started libpod-conmon-e079bad09cdc73b3c34136268ac257523c7c39996366950f0c8607c0ebae6ca8.scope.
Jan 26 12:42:42 np0005596060 podman[92490]: 2026-01-26 17:42:42.20416373 +0000 UTC m=+0.035431440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:42:42 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/241926aa7104b4c24eb81493a3725e25c2407ff5aebf18ab61d9f81d9df3c533/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/241926aa7104b4c24eb81493a3725e25c2407ff5aebf18ab61d9f81d9df3c533/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:42:42 np0005596060 podman[92490]: 2026-01-26 17:42:42.3310961 +0000 UTC m=+0.162363880 container init e079bad09cdc73b3c34136268ac257523c7c39996366950f0c8607c0ebae6ca8 (image=quay.io/ceph/ceph:v18, name=charming_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 12:42:42 np0005596060 podman[92490]: 2026-01-26 17:42:42.337579461 +0000 UTC m=+0.168847161 container start e079bad09cdc73b3c34136268ac257523c7c39996366950f0c8607c0ebae6ca8 (image=quay.io/ceph/ceph:v18, name=charming_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 12:42:42 np0005596060 podman[92490]: 2026-01-26 17:42:42.347109647 +0000 UTC m=+0.178377347 container attach e079bad09cdc73b3c34136268ac257523c7c39996366950f0c8607c0ebae6ca8 (image=quay.io/ceph/ceph:v18, name=charming_stonebraker, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zjkivk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zjkivk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zjkivk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:42:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:42:42 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.zjkivk on compute-0
Jan 26 12:42:42 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.zjkivk on compute-0
Jan 26 12:42:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v122: 101 pgs: 101 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 511 B/s wr, 0 op/s
Jan 26 12:42:42 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14334 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 12:42:42 np0005596060 charming_stonebraker[92507]: 
Jan 26 12:42:42 np0005596060 charming_stonebraker[92507]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 26 12:42:42 np0005596060 systemd[1]: libpod-e079bad09cdc73b3c34136268ac257523c7c39996366950f0c8607c0ebae6ca8.scope: Deactivated successfully.
Jan 26 12:42:42 np0005596060 podman[92649]: 2026-01-26 17:42:42.999642558 +0000 UTC m=+0.035160843 container died e079bad09cdc73b3c34136268ac257523c7c39996366950f0c8607c0ebae6ca8 (image=quay.io/ceph/ceph:v18, name=charming_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 12:42:43 np0005596060 systemd[1]: var-lib-containers-storage-overlay-241926aa7104b4c24eb81493a3725e25c2407ff5aebf18ab61d9f81d9df3c533-merged.mount: Deactivated successfully.
Jan 26 12:42:43 np0005596060 podman[92649]: 2026-01-26 17:42:43.045540037 +0000 UTC m=+0.081058302 container remove e079bad09cdc73b3c34136268ac257523c7c39996366950f0c8607c0ebae6ca8 (image=quay.io/ceph/ceph:v18, name=charming_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 12:42:43 np0005596060 systemd[1]: libpod-conmon-e079bad09cdc73b3c34136268ac257523c7c39996366950f0c8607c0ebae6ca8.scope: Deactivated successfully.
Jan 26 12:42:43 np0005596060 podman[92689]: 2026-01-26 17:42:43.097787633 +0000 UTC m=+0.024430547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 26 12:42:43 np0005596060 podman[92689]: 2026-01-26 17:42:43.30480734 +0000 UTC m=+0.231450234 container create ee1a68ea20520843f8c03c8e3fc25d0894e9de797c57feafb494760ea3c5b814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilson, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zjkivk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.zjkivk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: Deploying daemon rgw.rgw.compute-0.zjkivk on compute-0
Jan 26 12:42:43 np0005596060 systemd[1]: Started libpod-conmon-ee1a68ea20520843f8c03c8e3fc25d0894e9de797c57feafb494760ea3c5b814.scope.
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 26 12:42:43 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 26 12:42:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 26 12:42:43 np0005596060 podman[92689]: 2026-01-26 17:42:43.397616763 +0000 UTC m=+0.324259707 container init ee1a68ea20520843f8c03c8e3fc25d0894e9de797c57feafb494760ea3c5b814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilson, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 12:42:43 np0005596060 podman[92689]: 2026-01-26 17:42:43.408324168 +0000 UTC m=+0.334967062 container start ee1a68ea20520843f8c03c8e3fc25d0894e9de797c57feafb494760ea3c5b814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:43 np0005596060 podman[92689]: 2026-01-26 17:42:43.412103072 +0000 UTC m=+0.338746156 container attach ee1a68ea20520843f8c03c8e3fc25d0894e9de797c57feafb494760ea3c5b814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 12:42:43 np0005596060 priceless_wilson[92705]: 167 167
Jan 26 12:42:43 np0005596060 systemd[1]: libpod-ee1a68ea20520843f8c03c8e3fc25d0894e9de797c57feafb494760ea3c5b814.scope: Deactivated successfully.
Jan 26 12:42:43 np0005596060 podman[92689]: 2026-01-26 17:42:43.415707311 +0000 UTC m=+0.342350205 container died ee1a68ea20520843f8c03c8e3fc25d0894e9de797c57feafb494760ea3c5b814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 26 12:42:43 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2b9ea4a6a86d2f681f67d3a2fe579a94475acf19888c8aa92ae0b8358fd15b6a-merged.mount: Deactivated successfully.
Jan 26 12:42:43 np0005596060 podman[92689]: 2026-01-26 17:42:43.467473156 +0000 UTC m=+0.394116080 container remove ee1a68ea20520843f8c03c8e3fc25d0894e9de797c57feafb494760ea3c5b814 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:42:43 np0005596060 systemd[1]: libpod-conmon-ee1a68ea20520843f8c03c8e3fc25d0894e9de797c57feafb494760ea3c5b814.scope: Deactivated successfully.
Jan 26 12:42:43 np0005596060 systemd[1]: Reloading.
Jan 26 12:42:43 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:42:43 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:42:43 np0005596060 systemd[1]: Reloading.
Jan 26 12:42:43 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:42:43 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:42:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:42:43
Jan 26 12:42:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:42:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Some PGs (0.009804) are unknown; try again later
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:42:44 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 41 pg[9.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:42:44 np0005596060 systemd[1]: Starting Ceph rgw.rgw.compute-0.zjkivk for d4cd1917-5876-51b6-bc64-65a16199754d...
Jan 26 12:42:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 26 12:42:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 26 12:42:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 26 12:42:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 26 12:42:44 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 26 12:42:44 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.102:0/882037292' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 26 12:42:44 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 26 12:42:44 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.101:0/942420072' entity='client.rgw.rgw.compute-1.dudysi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 26 12:42:44 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 26 12:42:44 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 42 pg[9.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [1] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:44 np0005596060 python3[92829]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:44 np0005596060 podman[92857]: 2026-01-26 17:42:44.495072552 +0000 UTC m=+0.057486347 container create 4abcabe7fbafe98f02e61b3725bca73999e6d997a5e7515e08e8cec57b06e8ce (image=quay.io/ceph/ceph:v18, name=focused_faraday, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 12:42:44 np0005596060 systemd[1]: Started libpod-conmon-4abcabe7fbafe98f02e61b3725bca73999e6d997a5e7515e08e8cec57b06e8ce.scope.
Jan 26 12:42:44 np0005596060 podman[92857]: 2026-01-26 17:42:44.469922028 +0000 UTC m=+0.032335833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:44 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/648d5eabcadc3d930d9028b1f247b0aa1b3bda4cc6c62abf6e1aba2ff4a4a814/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/648d5eabcadc3d930d9028b1f247b0aa1b3bda4cc6c62abf6e1aba2ff4a4a814/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:44 np0005596060 podman[92894]: 2026-01-26 17:42:44.580098382 +0000 UTC m=+0.062464461 container create 720c787ea3907bace9cfccb4f3bd93e43668dde5c1297a0a4ad9aca79b5ef4df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-rgw-rgw-compute-0-zjkivk, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:44 np0005596060 podman[92894]: 2026-01-26 17:42:44.546899498 +0000 UTC m=+0.029265547 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:44 np0005596060 podman[92857]: 2026-01-26 17:42:44.692118451 +0000 UTC m=+0.254532316 container init 4abcabe7fbafe98f02e61b3725bca73999e6d997a5e7515e08e8cec57b06e8ce (image=quay.io/ceph/ceph:v18, name=focused_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Jan 26 12:42:44 np0005596060 podman[92857]: 2026-01-26 17:42:44.703512674 +0000 UTC m=+0.265926499 container start 4abcabe7fbafe98f02e61b3725bca73999e6d997a5e7515e08e8cec57b06e8ce (image=quay.io/ceph/ceph:v18, name=focused_faraday, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:44 np0005596060 ansible-async_wrapper.py[92304]: Done in kid B.
Jan 26 12:42:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v125: 102 pgs: 1 unknown, 101 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Jan 26 12:42:44 np0005596060 podman[92857]: 2026-01-26 17:42:44.889478088 +0000 UTC m=+0.451891873 container attach 4abcabe7fbafe98f02e61b3725bca73999e6d997a5e7515e08e8cec57b06e8ce (image=quay.io/ceph/ceph:v18, name=focused_faraday, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 26 12:42:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1584d8cf54ec906a4aea4f97dd5403260b6e4ade6bf279e59e25d14cb998cb4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1584d8cf54ec906a4aea4f97dd5403260b6e4ade6bf279e59e25d14cb998cb4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1584d8cf54ec906a4aea4f97dd5403260b6e4ade6bf279e59e25d14cb998cb4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1584d8cf54ec906a4aea4f97dd5403260b6e4ade6bf279e59e25d14cb998cb4f/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.zjkivk supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:44 np0005596060 podman[92894]: 2026-01-26 17:42:44.918518579 +0000 UTC m=+0.400884638 container init 720c787ea3907bace9cfccb4f3bd93e43668dde5c1297a0a4ad9aca79b5ef4df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-rgw-rgw-compute-0-zjkivk, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:44 np0005596060 podman[92894]: 2026-01-26 17:42:44.931959242 +0000 UTC m=+0.414325291 container start 720c787ea3907bace9cfccb4f3bd93e43668dde5c1297a0a4ad9aca79b5ef4df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-rgw-rgw-compute-0-zjkivk, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:44 np0005596060 bash[92894]: 720c787ea3907bace9cfccb4f3bd93e43668dde5c1297a0a4ad9aca79b5ef4df
Jan 26 12:42:44 np0005596060 systemd[1]: Started Ceph rgw.rgw.compute-0.zjkivk for d4cd1917-5876-51b6-bc64-65a16199754d.
Jan 26 12:42:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:45 np0005596060 radosgw[92919]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 26 12:42:45 np0005596060 radosgw[92919]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 26 12:42:45 np0005596060 radosgw[92919]: framework: beast
Jan 26 12:42:45 np0005596060 radosgw[92919]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 26 12:42:45 np0005596060 radosgw[92919]: init_numa not setting numa affinity
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:45 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev 73e9ab06-5e0f-418e-b52d-652c1c5662d3 (Updating rgw.rgw deployment (+3 -> 3))
Jan 26 12:42:45 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 73e9ab06-5e0f-418e-b52d-652c1c5662d3 (Updating rgw.rgw deployment (+3 -> 3)) in 7 seconds
Jan 26 12:42:45 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 26 12:42:45 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:45 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev 5ae9cb46-8164-4505-9e86-735ef09fa719 (Updating mds.cephfs deployment (+3 -> 3))
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.oqvedy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.oqvedy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.oqvedy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:42:45 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.oqvedy on compute-2
Jan 26 12:42:45 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.oqvedy on compute-2
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:42:45 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14346 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 12:42:45 np0005596060 focused_faraday[92907]: 
Jan 26 12:42:45 np0005596060 focused_faraday[92907]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 26 12:42:45 np0005596060 systemd[1]: libpod-4abcabe7fbafe98f02e61b3725bca73999e6d997a5e7515e08e8cec57b06e8ce.scope: Deactivated successfully.
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 26 12:42:45 np0005596060 conmon[92907]: conmon 4abcabe7fbafe98f02e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4abcabe7fbafe98f02e61b3725bca73999e6d997a5e7515e08e8cec57b06e8ce.scope/container/memory.events
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 26 12:42:45 np0005596060 podman[93002]: 2026-01-26 17:42:45.460029805 +0000 UTC m=+0.031624086 container died 4abcabe7fbafe98f02e61b3725bca73999e6d997a5e7515e08e8cec57b06e8ce (image=quay.io/ceph/ceph:v18, name=focused_faraday, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/814761124' entity='client.rgw.rgw.compute-0.zjkivk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 12:42:45 np0005596060 systemd[1]: var-lib-containers-storage-overlay-648d5eabcadc3d930d9028b1f247b0aa1b3bda4cc6c62abf6e1aba2ff4a4a814-merged.mount: Deactivated successfully.
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.oqvedy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.oqvedy", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 26 12:42:45 np0005596060 podman[93002]: 2026-01-26 17:42:45.589491497 +0000 UTC m=+0.161085768 container remove 4abcabe7fbafe98f02e61b3725bca73999e6d997a5e7515e08e8cec57b06e8ce (image=quay.io/ceph/ceph:v18, name=focused_faraday, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 12:42:45 np0005596060 systemd[1]: libpod-conmon-4abcabe7fbafe98f02e61b3725bca73999e6d997a5e7515e08e8cec57b06e8ce.scope: Deactivated successfully.
Jan 26 12:42:45 np0005596060 ceph-mgr[74563]: [progress INFO root] Writing back 11 completed events
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 26 12:42:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:45 np0005596060 ceph-mgr[74563]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/814761124' entity='client.rgw.rgw.compute-0.zjkivk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 26 12:42:46 np0005596060 python3[93042]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: Deploying daemon mds.cephfs.compute-2.oqvedy on compute-2
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.102:0/882037292' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.101:0/942420072' entity='client.rgw.rgw.compute-1.dudysi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/814761124' entity='client.rgw.rgw.compute-0.zjkivk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/814761124' entity='client.rgw.rgw.compute-0.zjkivk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 26 12:42:46 np0005596060 podman[93043]: 2026-01-26 17:42:46.592547355 +0000 UTC m=+0.047770676 container create 22b50a59cc0b245fb43a5136ede7f57d4e8ae1d0466315d451175a18bf8ef3a4 (image=quay.io/ceph/ceph:v18, name=trusting_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 12:42:46 np0005596060 systemd[1]: Started libpod-conmon-22b50a59cc0b245fb43a5136ede7f57d4e8ae1d0466315d451175a18bf8ef3a4.scope.
Jan 26 12:42:46 np0005596060 podman[93043]: 2026-01-26 17:42:46.571688507 +0000 UTC m=+0.026911808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:46 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b74ebb34d744607fd5e8d1e1e5f672b4a25ae1b482e2d23af069d18334d8aa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35b74ebb34d744607fd5e8d1e1e5f672b4a25ae1b482e2d23af069d18334d8aa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:46 np0005596060 podman[93043]: 2026-01-26 17:42:46.696569176 +0000 UTC m=+0.151792557 container init 22b50a59cc0b245fb43a5136ede7f57d4e8ae1d0466315d451175a18bf8ef3a4 (image=quay.io/ceph/ceph:v18, name=trusting_jones, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:46 np0005596060 podman[93043]: 2026-01-26 17:42:46.71043526 +0000 UTC m=+0.165658581 container start 22b50a59cc0b245fb43a5136ede7f57d4e8ae1d0466315d451175a18bf8ef3a4 (image=quay.io/ceph/ceph:v18, name=trusting_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:42:46 np0005596060 podman[93043]: 2026-01-26 17:42:46.714502091 +0000 UTC m=+0.169725412 container attach 22b50a59cc0b245fb43a5136ede7f57d4e8ae1d0466315d451175a18bf8ef3a4 (image=quay.io/ceph/ceph:v18, name=trusting_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 26 12:42:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v128: 103 pgs: 2 unknown, 101 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:42:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wenkwv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wenkwv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wenkwv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:42:47 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.wenkwv on compute-0
Jan 26 12:42:47 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.wenkwv on compute-0
Jan 26 12:42:47 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.14352 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 26 12:42:47 np0005596060 trusting_jones[93058]: 
Jan 26 12:42:47 np0005596060 trusting_jones[93058]: [{"container_id": "2653e44b26a1", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.67%", "created": "2026-01-26T17:40:10.998239Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-26T17:40:11.063144Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-26T17:41:15.464838Z", "memory_usage": 11618222, "ports": [], "service_name": "crash", "started": "2026-01-26T17:40:10.892189Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d4cd1917-5876-51b6-bc64-65a16199754d@crash.compute-0", "version": "18.2.7"}, {"container_id": "5ff2518cec59", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.82%", "created": "2026-01-26T17:40:55.938366Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2026-01-26T17:40:55.997990Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-26T17:42:18.544045Z", "memory_usage": 11723079, "ports": [], "service_name": "crash", "started": "2026-01-26T17:40:55.824228Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d4cd1917-5876-51b6-bc64-65a16199754d@crash.compute-1", "version": "18.2.7"}, {"container_id": "518fd5dcf420", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.52%", "created": "2026-01-26T17:42:00.652133Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2026-01-26T17:42:00.724441Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-26T17:42:18.869614Z", "memory_usage": 11660165, "ports": [], "service_name": "crash", "started": "2026-01-26T17:41:59.110736Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d4cd1917-5876-51b6-bc64-65a16199754d@crash.compute-2", "version": "18.2.7"}, {"daemon_id": "cephfs.compute-2.oqvedy", "daemon_name": "mds.cephfs.compute-2.oqvedy", "daemon_type": "mds", "events": ["2026-01-26T17:42:47.031076Z daemon:mds.cephfs.compute-2.oqvedy [INFO] \"Deployed mds.cephfs.compute-2.oqvedy on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "c9380c6bab6f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "39.53%", "created": "2026-01-26T17:38:56.856813Z", "daemon_id": "compute-0.mbryrf", "daemon_name": "mgr.compute-0.mbryrf", "daemon_type": "mgr", "events": ["2026-01-26T17:40:17.337241Z daemon:mgr.compute-0.mbryrf [INFO] \"Reconfigured mgr.compute-0.mbryrf on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-26T17:41:15.464774Z", "memory_usage": 546622668, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-26T17:38:56.761049Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d4cd1917-5876-51b6-bc64-65a16199754d@mgr.compute-0.mbryrf", "version": "18.2.7"}, {"container_id": "43c1b9e3b3bf", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "95.84%", "created": "2026-01-26T17:41:57.082975Z", "daemon_id": "compute-1.qpyzhk", "daemon_name": "mgr.compute-1.qpyzhk", "daemon_type": "mgr", "events": ["2026-01-26T17:41:57.246454Z daemon:mgr.compute-1.qpyzhk [INFO] \"Deployed mgr.compute-1.qpyzhk on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-26T17:42:18.544583Z", "memory_usage": 513277952, "ports": [8765], "service_name": "mgr", "started": "2026-01-26T17:41:56.985241Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d4cd1917-5876-51b6-bc64-65a16199754d@mgr.compute-1.qpyzhk", "version": "18.2.7"}, {"container_id": "73cb5682238b", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "74.55%", "created": "2026-01-26T17:41:50.696867Z", "daemon_id": "compute-2.cchxrf", "daemon_name": "mgr.compute-2.cchxrf", "daemon_type": "mgr", "events": ["2026-01-26T17:41:55.187963Z daemon:mgr.compute-2.cchxrf [INFO] \"Deployed mgr.compute-2.cchxrf on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-26T17:42:18.869542Z", "memory_usage": 512334233, "ports": [8765], "service_name": "mgr", "started": "2026-01-26T17:41:50.588389Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d4cd1917-5876-51b6-bc64-65a16199754d@mgr.compute-2.cchxrf", "version": "18.2.7"}, {"container_id": "ebd9c630f931", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.65%", "created": "2026-01-26T17:38:51.675682Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-26T17:40:15.747477Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-26T17:41:15.464682Z", "memory_request": 2147483648, "memory_usage": 30974935, "ports": [], "service_name": "mon", "started": "2026-01-26T17:38:54.508030Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-d4cd1917-5876-51b6-bc64-65a16199754d@mon.compute-0", "version": "18.2.7"}, {"container_id": "9648c8436bce", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_i
Jan 26 12:42:47 np0005596060 systemd[1]: libpod-22b50a59cc0b245fb43a5136ede7f57d4e8ae1d0466315d451175a18bf8ef3a4.scope: Deactivated successfully.
Jan 26 12:42:47 np0005596060 podman[93043]: 2026-01-26 17:42:47.275372577 +0000 UTC m=+0.730595898 container died 22b50a59cc0b245fb43a5136ede7f57d4e8ae1d0466315d451175a18bf8ef3a4 (image=quay.io/ceph/ceph:v18, name=trusting_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 12:42:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay-35b74ebb34d744607fd5e8d1e1e5f672b4a25ae1b482e2d23af069d18334d8aa-merged.mount: Deactivated successfully.
Jan 26 12:42:47 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 26 12:42:47 np0005596060 podman[93043]: 2026-01-26 17:42:47.332755741 +0000 UTC m=+0.787979032 container remove 22b50a59cc0b245fb43a5136ede7f57d4e8ae1d0466315d451175a18bf8ef3a4 (image=quay.io/ceph/ceph:v18, name=trusting_jones, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:47 np0005596060 systemd[1]: libpod-conmon-22b50a59cc0b245fb43a5136ede7f57d4e8ae1d0466315d451175a18bf8ef3a4.scope: Deactivated successfully.
Jan 26 12:42:47 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wenkwv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wenkwv", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 26 12:42:47 np0005596060 rsyslogd[1005]: message too long (14383) with configured size 8096, begin of message is: [{"container_id": "2653e44b26a1", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e3 new map
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-26T17:42:22.558304+0000#012modified#0112026-01-26T17:42:22.558341+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.oqvedy{-1:24157} state up:standby seq 1 addr [v2:192.168.122.102:6804/565974780,v1:192.168.122.102:6805/565974780] compat {c=[1],r=[1],i=[7ff]}]
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/565974780,v1:192.168.122.102:6805/565974780] up:boot
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/565974780,v1:192.168.122.102:6805/565974780] as mds.0
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.oqvedy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.oqvedy"} v 0) v1
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.oqvedy"}]: dispatch
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e3 all = 0
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e4 new map
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-26T17:42:22.558304+0000#012modified#0112026-01-26T17:42:47.614358+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.oqvedy{0:24157} state up:creating seq 1 addr [v2:192.168.122.102:6804/565974780,v1:192.168.122.102:6805/565974780] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.oqvedy=up:creating}
Jan 26 12:42:47 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.oqvedy is now active in filesystem cephfs as rank 0
Jan 26 12:42:47 np0005596060 podman[93252]: 2026-01-26 17:42:47.737153135 +0000 UTC m=+0.049440218 container create 3f282f130337bfb1e727b6431d5fa31bb190d2ae74ea1bcc5a905bc1eca30324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:42:47 np0005596060 systemd[1]: Started libpod-conmon-3f282f130337bfb1e727b6431d5fa31bb190d2ae74ea1bcc5a905bc1eca30324.scope.
Jan 26 12:42:47 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:47 np0005596060 podman[93252]: 2026-01-26 17:42:47.801284616 +0000 UTC m=+0.113571729 container init 3f282f130337bfb1e727b6431d5fa31bb190d2ae74ea1bcc5a905bc1eca30324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 26 12:42:47 np0005596060 podman[93252]: 2026-01-26 17:42:47.808587698 +0000 UTC m=+0.120874771 container start 3f282f130337bfb1e727b6431d5fa31bb190d2ae74ea1bcc5a905bc1eca30324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 12:42:47 np0005596060 podman[93252]: 2026-01-26 17:42:47.714626886 +0000 UTC m=+0.026913989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:47 np0005596060 clever_chandrasekhar[93269]: 167 167
Jan 26 12:42:47 np0005596060 systemd[1]: libpod-3f282f130337bfb1e727b6431d5fa31bb190d2ae74ea1bcc5a905bc1eca30324.scope: Deactivated successfully.
Jan 26 12:42:47 np0005596060 conmon[93269]: conmon 3f282f130337bfb1e727 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3f282f130337bfb1e727b6431d5fa31bb190d2ae74ea1bcc5a905bc1eca30324.scope/container/memory.events
Jan 26 12:42:47 np0005596060 podman[93252]: 2026-01-26 17:42:47.813690024 +0000 UTC m=+0.125977127 container attach 3f282f130337bfb1e727b6431d5fa31bb190d2ae74ea1bcc5a905bc1eca30324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:47 np0005596060 podman[93252]: 2026-01-26 17:42:47.814365871 +0000 UTC m=+0.126652974 container died 3f282f130337bfb1e727b6431d5fa31bb190d2ae74ea1bcc5a905bc1eca30324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:42:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b7cfb7f208cdc0866ee0b1b435233f6da6123028bd23dc50dfbe586ab416d0cc-merged.mount: Deactivated successfully.
Jan 26 12:42:47 np0005596060 podman[93252]: 2026-01-26 17:42:47.8546206 +0000 UTC m=+0.166907703 container remove 3f282f130337bfb1e727b6431d5fa31bb190d2ae74ea1bcc5a905bc1eca30324 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chandrasekhar, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 26 12:42:47 np0005596060 systemd[1]: libpod-conmon-3f282f130337bfb1e727b6431d5fa31bb190d2ae74ea1bcc5a905bc1eca30324.scope: Deactivated successfully.
Jan 26 12:42:47 np0005596060 systemd[1]: Reloading.
Jan 26 12:42:48 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:42:48 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:42:48 np0005596060 systemd[1]: Reloading.
Jan 26 12:42:48 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 26 12:42:48 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 26 12:42:48 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:42:48 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1497594370' entity='client.rgw.rgw.compute-0.zjkivk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 12:42:48 np0005596060 systemd[1]: Starting Ceph mds.cephfs.compute-0.wenkwv for d4cd1917-5876-51b6-bc64-65a16199754d...
Jan 26 12:42:48 np0005596060 python3[93390]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: Deploying daemon mds.cephfs.compute-0.wenkwv on compute-0
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: daemon mds.cephfs.compute-2.oqvedy assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: Cluster is now healthy
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: daemon mds.cephfs.compute-2.oqvedy is now active in filesystem cephfs as rank 0
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1497594370' entity='client.rgw.rgw.compute-0.zjkivk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.101:0/3727664983' entity='client.rgw.rgw.compute-1.dudysi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.102:0/1450920383' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 26 12:42:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v131: 104 pgs: 2 unknown, 102 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 1 op/s
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e5 new map
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e5 print_map
e5
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-26T17:42:22.558304+0000
modified	2026-01-26T17:42:48.628872+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24157}
failed	
damaged	
stopped	
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer	
bal_rank_mask	-1
standby_count_wanted	0
[mds.cephfs.compute-2.oqvedy{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/565974780,v1:192.168.122.102:6805/565974780] compat {c=[1],r=[1],i=[7ff]}]
Jan 26 12:42:48 np0005596060 podman[93415]: 2026-01-26 17:42:48.788666985 +0000 UTC m=+0.051955340 container create 538746fec805c09f4393f35135c088f7ac29a15dba704ca7f5913dbd10252bca (image=quay.io/ceph/ceph:v18, name=optimistic_jennings, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/565974780,v1:192.168.122.102:6805/565974780] up:active
Jan 26 12:42:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.oqvedy=up:active}
Jan 26 12:42:48 np0005596060 systemd[1]: Started libpod-conmon-538746fec805c09f4393f35135c088f7ac29a15dba704ca7f5913dbd10252bca.scope.
Jan 26 12:42:48 np0005596060 podman[93415]: 2026-01-26 17:42:48.767804577 +0000 UTC m=+0.031092952 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:48 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969278ec1e15906eeeb35143d50bd8b6385e080f70fbaba494d10a18e20f355d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969278ec1e15906eeeb35143d50bd8b6385e080f70fbaba494d10a18e20f355d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:48 np0005596060 podman[93453]: 2026-01-26 17:42:48.884226066 +0000 UTC m=+0.044821223 container create eb4ff9b731661331732f7f49a1b2cb9d3f4fc6fc7d96596aacfc53c05bcef6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mds-cephfs-compute-0-wenkwv, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 12:42:48 np0005596060 podman[93415]: 2026-01-26 17:42:48.927721916 +0000 UTC m=+0.191010321 container init 538746fec805c09f4393f35135c088f7ac29a15dba704ca7f5913dbd10252bca (image=quay.io/ceph/ceph:v18, name=optimistic_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 12:42:48 np0005596060 podman[93415]: 2026-01-26 17:42:48.936283878 +0000 UTC m=+0.199572233 container start 538746fec805c09f4393f35135c088f7ac29a15dba704ca7f5913dbd10252bca (image=quay.io/ceph/ceph:v18, name=optimistic_jennings, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:48 np0005596060 podman[93415]: 2026-01-26 17:42:48.940424301 +0000 UTC m=+0.203712706 container attach 538746fec805c09f4393f35135c088f7ac29a15dba704ca7f5913dbd10252bca (image=quay.io/ceph/ceph:v18, name=optimistic_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 26 12:42:48 np0005596060 podman[93453]: 2026-01-26 17:42:48.865075291 +0000 UTC m=+0.025670468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:42:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5dc5a836266a325be0de9b5c6041bc7ac3538cd2450485d2b116c79c7c3795e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5dc5a836266a325be0de9b5c6041bc7ac3538cd2450485d2b116c79c7c3795e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5dc5a836266a325be0de9b5c6041bc7ac3538cd2450485d2b116c79c7c3795e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5dc5a836266a325be0de9b5c6041bc7ac3538cd2450485d2b116c79c7c3795e/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.wenkwv supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:48 np0005596060 podman[93453]: 2026-01-26 17:42:48.977973962 +0000 UTC m=+0.138569119 container init eb4ff9b731661331732f7f49a1b2cb9d3f4fc6fc7d96596aacfc53c05bcef6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mds-cephfs-compute-0-wenkwv, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 26 12:42:48 np0005596060 podman[93453]: 2026-01-26 17:42:48.987831287 +0000 UTC m=+0.148426444 container start eb4ff9b731661331732f7f49a1b2cb9d3f4fc6fc7d96596aacfc53c05bcef6c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mds-cephfs-compute-0-wenkwv, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:48 np0005596060 bash[93453]: eb4ff9b731661331732f7f49a1b2cb9d3f4fc6fc7d96596aacfc53c05bcef6c8
Jan 26 12:42:48 np0005596060 systemd[1]: Started Ceph mds.cephfs.compute-0.wenkwv for d4cd1917-5876-51b6-bc64-65a16199754d.
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:42:49 np0005596060 ceph-mds[93477]: set uid:gid to 167:167 (ceph:ceph)
Jan 26 12:42:49 np0005596060 ceph-mds[93477]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 26 12:42:49 np0005596060 ceph-mds[93477]: main not setting numa affinity
Jan 26 12:42:49 np0005596060 ceph-mds[93477]: pidfile_write: ignore empty --pid-file
Jan 26 12:42:49 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mds-cephfs-compute-0-wenkwv[93473]: starting mds.cephfs.compute-0.wenkwv at 
Jan 26 12:42:49 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Updating MDS map to version 5 from mon.0
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:42:49 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 46 pg[11.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oxxatt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oxxatt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oxxatt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:42:49 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.oxxatt on compute-1
Jan 26 12:42:49 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.oxxatt on compute-1
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1497594370' entity='client.rgw.rgw.compute-0.zjkivk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1497594370' entity='client.rgw.rgw.compute-0.zjkivk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 12:42:49 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 47 pg[11.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [1] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2303791623' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 26 12:42:49 np0005596060 optimistic_jennings[93459]: 
Jan 26 12:42:49 np0005596060 optimistic_jennings[93459]: {"fsid":"d4cd1917-5876-51b6-bc64-65a16199754d","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":54,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":47,"num_osds":3,"num_up_osds":3,"osd_up_since":1769449341,"num_in_osds":3,"osd_in_since":1769449323,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":101},{"state_name":"unknown","count":2}],"num_pgs":103,"num_pools":10,"num_objects":6,"data_bytes":460666,"bytes_used":84135936,"bytes_avail":22451859456,"bytes_total":22535995392,"unknown_pgs_ratio":0.019417475908994675},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.oqvedy","status":"up:active","gid":24157}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2026-01-26T17:42:22.773224+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.qpyzhk":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.cchxrf":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"5ae9cb46-8164-4505-9e86-735ef09fa719":{"message":"Updating mds.cephfs deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true},"c2c2ff3c-f8d9-4784-a489-b911b62b7f4d":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 26 12:42:49 np0005596060 systemd[1]: libpod-538746fec805c09f4393f35135c088f7ac29a15dba704ca7f5913dbd10252bca.scope: Deactivated successfully.
Jan 26 12:42:49 np0005596060 podman[93415]: 2026-01-26 17:42:49.557349128 +0000 UTC m=+0.820637513 container died 538746fec805c09f4393f35135c088f7ac29a15dba704ca7f5913dbd10252bca (image=quay.io/ceph/ceph:v18, name=optimistic_jennings, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay-969278ec1e15906eeeb35143d50bd8b6385e080f70fbaba494d10a18e20f355d-merged.mount: Deactivated successfully.
Jan 26 12:42:49 np0005596060 podman[93415]: 2026-01-26 17:42:49.615986383 +0000 UTC m=+0.879274738 container remove 538746fec805c09f4393f35135c088f7ac29a15dba704ca7f5913dbd10252bca (image=quay.io/ceph/ceph:v18, name=optimistic_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:42:49 np0005596060 systemd[1]: libpod-conmon-538746fec805c09f4393f35135c088f7ac29a15dba704ca7f5913dbd10252bca.scope: Deactivated successfully.
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oxxatt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.oxxatt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: Deploying daemon mds.cephfs.compute-1.oxxatt on compute-1
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1497594370' entity='client.rgw.rgw.compute-0.zjkivk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1497594370' entity='client.rgw.rgw.compute-0.zjkivk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.101:0/3727664983' entity='client.rgw.rgw.compute-1.dudysi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.102:0/1450920383' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e6 new map
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e6 print_map#012e6#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-26T17:42:22.558304+0000#012modified#0112026-01-26T17:42:48.628872+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.oqvedy{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/565974780,v1:192.168.122.102:6805/565974780] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wenkwv{-1:14373} state up:standby seq 1 addr [v2:192.168.122.100:6806/1188189847,v1:192.168.122.100:6807/1188189847] compat {c=[1],r=[1],i=[7ff]}]
Jan 26 12:42:49 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Updating MDS map to version 6 from mon.0
Jan 26 12:42:49 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Monitors have assigned me to become a standby.
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/1188189847,v1:192.168.122.100:6807/1188189847] up:boot
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.oqvedy=up:active} 1 up:standby
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.wenkwv"} v 0) v1
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wenkwv"}]: dispatch
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e6 all = 0
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e7 new map
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e7 print_map#012e7#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-26T17:42:22.558304+0000#012modified#0112026-01-26T17:42:48.628872+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.oqvedy{0:24157} state up:active seq 2 addr [v2:192.168.122.102:6804/565974780,v1:192.168.122.102:6805/565974780] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wenkwv{-1:14373} state up:standby seq 1 addr [v2:192.168.122.100:6806/1188189847,v1:192.168.122.100:6807/1188189847] compat {c=[1],r=[1],i=[7ff]}]
Jan 26 12:42:49 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.oqvedy=up:active} 1 up:standby
Jan 26 12:42:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:42:50 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 26 12:42:50 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 26 12:42:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 26 12:42:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v133: 104 pgs: 2 unknown, 102 active+clean; 450 KiB data, 80 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 1 op/s
Jan 26 12:42:50 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event c2c2ff3c-f8d9-4784-a489-b911b62b7f4d (Global Recovery Event) in 5 seconds
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1497594370' entity='client.rgw.rgw.compute-0.zjkivk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 26 12:42:51 np0005596060 python3[93554]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:51 np0005596060 podman[93555]: 2026-01-26 17:42:51.359661008 +0000 UTC m=+0.129248339 container create fb0d2806e21f6e8c95a234dcfe2d3ec663103acbe83889ebca58cbd71945120d (image=quay.io/ceph/ceph:v18, name=upbeat_roentgen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 26 12:42:51 np0005596060 podman[93555]: 2026-01-26 17:42:51.270554276 +0000 UTC m=+0.040141627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 26 12:42:51 np0005596060 systemd[1]: Started libpod-conmon-fb0d2806e21f6e8c95a234dcfe2d3ec663103acbe83889ebca58cbd71945120d.scope.
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:51 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev 5ae9cb46-8164-4505-9e86-735ef09fa719 (Updating mds.cephfs deployment (+3 -> 3))
Jan 26 12:42:51 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 5ae9cb46-8164-4505-9e86-735ef09fa719 (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Jan 26 12:42:51 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd51b99cb67d5bbdd59bc8a3186558b4e70de8f5e81219e0e283664438e6798/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd51b99cb67d5bbdd59bc8a3186558b4e70de8f5e81219e0e283664438e6798/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 26 12:42:51 np0005596060 podman[93555]: 2026-01-26 17:42:51.474795053 +0000 UTC m=+0.244382404 container init fb0d2806e21f6e8c95a234dcfe2d3ec663103acbe83889ebca58cbd71945120d (image=quay.io/ceph/ceph:v18, name=upbeat_roentgen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 12:42:51 np0005596060 podman[93555]: 2026-01-26 17:42:51.481236433 +0000 UTC m=+0.250823764 container start fb0d2806e21f6e8c95a234dcfe2d3ec663103acbe83889ebca58cbd71945120d (image=quay.io/ceph/ceph:v18, name=upbeat_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:51 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev 7a082bae-676a-4392-8f97-159eb655d715 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 26 12:42:51 np0005596060 podman[93555]: 2026-01-26 17:42:51.486830292 +0000 UTC m=+0.256417643 container attach fb0d2806e21f6e8c95a234dcfe2d3ec663103acbe83889ebca58cbd71945120d (image=quay.io/ceph/ceph:v18, name=upbeat_roentgen, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Jan 26 12:42:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:51 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.wyazzh on compute-0
Jan 26 12:42:51 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.wyazzh on compute-0
Jan 26 12:42:51 np0005596060 radosgw[92919]: LDAP not started since no server URIs were provided in the configuration.
Jan 26 12:42:51 np0005596060 radosgw[92919]: framework: beast
Jan 26 12:42:51 np0005596060 radosgw[92919]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 26 12:42:51 np0005596060 radosgw[92919]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 26 12:42:51 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-rgw-rgw-compute-0-zjkivk[92915]: 2026-01-26T17:42:51.547+0000 7fc455866940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 26 12:42:51 np0005596060 radosgw[92919]: starting handler: beast
Jan 26 12:42:51 np0005596060 radosgw[92919]: set uid:gid to 167:167 (ceph:ceph)
Jan 26 12:42:51 np0005596060 radosgw[92919]: mgrc service_daemon_register rgw.14358 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.zjkivk,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=16904eab-39d2-4595-a23c-00ad5300f474,zone_name=default,zonegroup_id=7e31b727-958b-45d2-9b71-ae42e4dae024,zonegroup_name=default}
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3475757103' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 26 12:42:52 np0005596060 upbeat_roentgen[93571]: 
Jan 26 12:42:52 np0005596060 upbeat_roentgen[93571]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502923980","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.zjkivk","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.dudysi","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.vncnzm","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 26 12:42:52 np0005596060 systemd[1]: libpod-fb0d2806e21f6e8c95a234dcfe2d3ec663103acbe83889ebca58cbd71945120d.scope: Deactivated successfully.
Jan 26 12:42:52 np0005596060 podman[93555]: 2026-01-26 17:42:52.0832141 +0000 UTC m=+0.852801431 container died fb0d2806e21f6e8c95a234dcfe2d3ec663103acbe83889ebca58cbd71945120d (image=quay.io/ceph/ceph:v18, name=upbeat_roentgen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:52 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2bd51b99cb67d5bbdd59bc8a3186558b4e70de8f5e81219e0e283664438e6798-merged.mount: Deactivated successfully.
Jan 26 12:42:52 np0005596060 podman[93555]: 2026-01-26 17:42:52.131360604 +0000 UTC m=+0.900947935 container remove fb0d2806e21f6e8c95a234dcfe2d3ec663103acbe83889ebca58cbd71945120d (image=quay.io/ceph/ceph:v18, name=upbeat_roentgen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:52 np0005596060 systemd[1]: libpod-conmon-fb0d2806e21f6e8c95a234dcfe2d3ec663103acbe83889ebca58cbd71945120d.scope: Deactivated successfully.
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: from='client.? 192.168.122.100:0/1497594370' entity='client.rgw.rgw.compute-0.zjkivk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-2.vncnzm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: from='client.? ' entity='client.rgw.rgw.compute-1.dudysi' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: Cluster is now healthy
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: Deploying daemon haproxy.rgw.default.compute-0.wyazzh on compute-0
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e8 new map
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e8 print_map#012e8#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-26T17:42:22.558304+0000#012modified#0112026-01-26T17:42:52.412531+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.oqvedy{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/565974780,v1:192.168.122.102:6805/565974780] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wenkwv{-1:14373} state up:standby seq 1 addr [v2:192.168.122.100:6806/1188189847,v1:192.168.122.100:6807/1188189847] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.oxxatt{-1:24149} state up:standby seq 1 addr [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] compat {c=[1],r=[1],i=[7ff]}]
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] up:boot
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/565974780,v1:192.168.122.102:6805/565974780] up:active
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.oqvedy=up:active} 2 up:standby
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.oxxatt"} v 0) v1
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.oxxatt"}]: dispatch
Jan 26 12:42:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e8 all = 0
Jan 26 12:42:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v135: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 191 KiB/s rd, 9.0 KiB/s wr, 344 op/s
Jan 26 12:42:53 np0005596060 python3[94351]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:53 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 26 12:42:53 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 26 12:42:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e9 new map
Jan 26 12:42:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e9 print_map#012e9#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-26T17:42:22.558304+0000#012modified#0112026-01-26T17:42:52.412531+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.oqvedy{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/565974780,v1:192.168.122.102:6805/565974780] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wenkwv{-1:14373} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1188189847,v1:192.168.122.100:6807/1188189847] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.oxxatt{-1:24149} state up:standby seq 1 addr [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] compat {c=[1],r=[1],i=[7ff]}]
Jan 26 12:42:53 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/1188189847,v1:192.168.122.100:6807/1188189847] up:standby
Jan 26 12:42:53 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.oqvedy=up:active} 2 up:standby
Jan 26 12:42:53 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Updating MDS map to version 9 from mon.0
Jan 26 12:42:54 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 26 12:42:54 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 26 12:42:54 np0005596060 podman[94352]: 2026-01-26 17:42:54.539308352 +0000 UTC m=+1.288890511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:54 np0005596060 podman[94352]: 2026-01-26 17:42:54.597499176 +0000 UTC m=+1.347081305 container create 37f4a5abe4244b9a627c995bac1a0993940c4192b74f7d492df2b27246387bfa (image=quay.io/ceph/ceph:v18, name=serene_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 26 12:42:54 np0005596060 podman[94277]: 2026-01-26 17:42:54.633042548 +0000 UTC m=+2.554253699 container create 9f645e5399f9964c618097b883de6ac582d5bc16bdad9531e826a7cd1d0cc187 (image=quay.io/ceph/haproxy:2.3, name=loving_banach)
Jan 26 12:42:54 np0005596060 systemd[1]: Started libpod-conmon-37f4a5abe4244b9a627c995bac1a0993940c4192b74f7d492df2b27246387bfa.scope.
Jan 26 12:42:54 np0005596060 podman[94277]: 2026-01-26 17:42:54.618071886 +0000 UTC m=+2.539283057 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 26 12:42:54 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:54 np0005596060 systemd[1]: Started libpod-conmon-9f645e5399f9964c618097b883de6ac582d5bc16bdad9531e826a7cd1d0cc187.scope.
Jan 26 12:42:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463ed71aecc67a63945b4ce0266ec15514149176c533e8e2d0d7059b8be4efbc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/463ed71aecc67a63945b4ce0266ec15514149176c533e8e2d0d7059b8be4efbc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:54 np0005596060 podman[94352]: 2026-01-26 17:42:54.676487736 +0000 UTC m=+1.426069915 container init 37f4a5abe4244b9a627c995bac1a0993940c4192b74f7d492df2b27246387bfa (image=quay.io/ceph/ceph:v18, name=serene_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 26 12:42:54 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:54 np0005596060 podman[94352]: 2026-01-26 17:42:54.689287483 +0000 UTC m=+1.438869622 container start 37f4a5abe4244b9a627c995bac1a0993940c4192b74f7d492df2b27246387bfa (image=quay.io/ceph/ceph:v18, name=serene_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:42:54 np0005596060 podman[94277]: 2026-01-26 17:42:54.692740359 +0000 UTC m=+2.613951580 container init 9f645e5399f9964c618097b883de6ac582d5bc16bdad9531e826a7cd1d0cc187 (image=quay.io/ceph/haproxy:2.3, name=loving_banach)
Jan 26 12:42:54 np0005596060 podman[94352]: 2026-01-26 17:42:54.69561056 +0000 UTC m=+1.445192699 container attach 37f4a5abe4244b9a627c995bac1a0993940c4192b74f7d492df2b27246387bfa (image=quay.io/ceph/ceph:v18, name=serene_bhabha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 12:42:54 np0005596060 podman[94277]: 2026-01-26 17:42:54.697098437 +0000 UTC m=+2.618309618 container start 9f645e5399f9964c618097b883de6ac582d5bc16bdad9531e826a7cd1d0cc187 (image=quay.io/ceph/haproxy:2.3, name=loving_banach)
Jan 26 12:42:54 np0005596060 loving_banach[94450]: 0 0
Jan 26 12:42:54 np0005596060 podman[94277]: 2026-01-26 17:42:54.700693756 +0000 UTC m=+2.621904937 container attach 9f645e5399f9964c618097b883de6ac582d5bc16bdad9531e826a7cd1d0cc187 (image=quay.io/ceph/haproxy:2.3, name=loving_banach)
Jan 26 12:42:54 np0005596060 systemd[1]: libpod-9f645e5399f9964c618097b883de6ac582d5bc16bdad9531e826a7cd1d0cc187.scope: Deactivated successfully.
Jan 26 12:42:54 np0005596060 podman[94277]: 2026-01-26 17:42:54.70204431 +0000 UTC m=+2.623255491 container died 9f645e5399f9964c618097b883de6ac582d5bc16bdad9531e826a7cd1d0cc187 (image=quay.io/ceph/haproxy:2.3, name=loving_banach)
Jan 26 12:42:54 np0005596060 systemd[1]: var-lib-containers-storage-overlay-daa1c63cad1bd9af79564f33db8d5ad5e2b9d4a96a4fbcb7cdbd38c9f83a322b-merged.mount: Deactivated successfully.
Jan 26 12:42:54 np0005596060 podman[94277]: 2026-01-26 17:42:54.748665226 +0000 UTC m=+2.669876417 container remove 9f645e5399f9964c618097b883de6ac582d5bc16bdad9531e826a7cd1d0cc187 (image=quay.io/ceph/haproxy:2.3, name=loving_banach)
Jan 26 12:42:54 np0005596060 systemd[1]: libpod-conmon-9f645e5399f9964c618097b883de6ac582d5bc16bdad9531e826a7cd1d0cc187.scope: Deactivated successfully.
Jan 26 12:42:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v136: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 162 KiB/s rd, 7.6 KiB/s wr, 291 op/s
Jan 26 12:42:54 np0005596060 systemd[1]: Reloading.
Jan 26 12:42:54 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:42:54 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:42:55 np0005596060 systemd[1]: Reloading.
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 1)
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 1)
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:42:55 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1114002205' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 26 12:42:55 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:42:55 np0005596060 serene_bhabha[94446]: mimic
Jan 26 12:42:55 np0005596060 podman[94567]: 2026-01-26 17:42:55.327619151 +0000 UTC m=+0.026307724 container died 37f4a5abe4244b9a627c995bac1a0993940c4192b74f7d492df2b27246387bfa (image=quay.io/ceph/ceph:v18, name=serene_bhabha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:42:55 np0005596060 systemd[1]: libpod-37f4a5abe4244b9a627c995bac1a0993940c4192b74f7d492df2b27246387bfa.scope: Deactivated successfully.
Jan 26 12:42:55 np0005596060 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.wyazzh for d4cd1917-5876-51b6-bc64-65a16199754d...
Jan 26 12:42:55 np0005596060 systemd[1]: var-lib-containers-storage-overlay-463ed71aecc67a63945b4ce0266ec15514149176c533e8e2d0d7059b8be4efbc-merged.mount: Deactivated successfully.
Jan 26 12:42:55 np0005596060 podman[94567]: 2026-01-26 17:42:55.456237222 +0000 UTC m=+0.154925775 container remove 37f4a5abe4244b9a627c995bac1a0993940c4192b74f7d492df2b27246387bfa (image=quay.io/ceph/ceph:v18, name=serene_bhabha, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:42:55 np0005596060 systemd[1]: libpod-conmon-37f4a5abe4244b9a627c995bac1a0993940c4192b74f7d492df2b27246387bfa.scope: Deactivated successfully.
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e10 new map
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e10 print_map#012e10#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-26T17:42:22.558304+0000#012modified#0112026-01-26T17:42:52.412531+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24157}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012[mds.cephfs.compute-2.oqvedy{0:24157} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/565974780,v1:192.168.122.102:6805/565974780] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wenkwv{-1:14373} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/1188189847,v1:192.168.122.100:6807/1188189847] compat {c=[1],r=[1],i=[7ff]}]#012[mds.cephfs.compute-1.oxxatt{-1:24149} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] compat {c=[1],r=[1],i=[7ff]}]
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] up:standby
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.oqvedy=up:active} 2 up:standby
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 26 12:42:55 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev f9315e36-6847-4ffb-94e4-6b333ecc408d (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 26 12:42:55 np0005596060 podman[94627]: 2026-01-26 17:42:55.684186298 +0000 UTC m=+0.044579347 container create e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:42:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a29c193046fc83eaa18aa96f19444cbf40d2e2fa18a694c47c2bcb988231850/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:55 np0005596060 podman[94627]: 2026-01-26 17:42:55.742759501 +0000 UTC m=+0.103152560 container init e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:42:55 np0005596060 podman[94627]: 2026-01-26 17:42:55.747081099 +0000 UTC m=+0.107474148 container start e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:42:55 np0005596060 bash[94627]: e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512
Jan 26 12:42:55 np0005596060 podman[94627]: 2026-01-26 17:42:55.663401102 +0000 UTC m=+0.023794161 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 26 12:42:55 np0005596060 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.wyazzh for d4cd1917-5876-51b6-bc64-65a16199754d.
Jan 26 12:42:55 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh[94643]: [NOTICE] 025/174255 (2) : New worker #1 (4) forked
Jan 26 12:42:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:42:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 26 12:42:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:42:55.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 26 12:42:56 np0005596060 ceph-mgr[74563]: [progress INFO root] Writing back 13 completed events
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:56 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.dyvhne on compute-2
Jan 26 12:42:56 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.dyvhne on compute-2
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:56 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 26 12:42:56 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 26 12:42:56 np0005596060 python3[94682]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 26 12:42:56 np0005596060 podman[94683]: 2026-01-26 17:42:56.660436261 +0000 UTC m=+0.111134288 container create 0a7954b76f0aa8ea8f24989dfffda14dda100fcc186cbb5da8d614891a860caf (image=quay.io/ceph/ceph:v18, name=suspicious_poitras, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 26 12:42:56 np0005596060 podman[94683]: 2026-01-26 17:42:56.591717186 +0000 UTC m=+0.042415233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:42:56 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev 6f76e629-f2f9-45af-8f39-7bfe40048b46 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:42:56 np0005596060 systemd[1]: Started libpod-conmon-0a7954b76f0aa8ea8f24989dfffda14dda100fcc186cbb5da8d614891a860caf.scope.
Jan 26 12:42:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v139: 104 pgs: 104 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 170 KiB/s rd, 8.0 KiB/s wr, 306 op/s
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:42:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:42:56 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:42:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a50f757d24443bce276e4b9150a9832e6019d67ad20c54c0001dbaa59b79ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40a50f757d24443bce276e4b9150a9832e6019d67ad20c54c0001dbaa59b79ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:42:56 np0005596060 podman[94683]: 2026-01-26 17:42:56.869821306 +0000 UTC m=+0.320519343 container init 0a7954b76f0aa8ea8f24989dfffda14dda100fcc186cbb5da8d614891a860caf (image=quay.io/ceph/ceph:v18, name=suspicious_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 12:42:56 np0005596060 podman[94683]: 2026-01-26 17:42:56.878549283 +0000 UTC m=+0.329247310 container start 0a7954b76f0aa8ea8f24989dfffda14dda100fcc186cbb5da8d614891a860caf (image=quay.io/ceph/ceph:v18, name=suspicious_poitras, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:42:56 np0005596060 podman[94683]: 2026-01-26 17:42:56.991200108 +0000 UTC m=+0.441898135 container attach 0a7954b76f0aa8ea8f24989dfffda14dda100fcc186cbb5da8d614891a860caf (image=quay.io/ceph/ceph:v18, name=suspicious_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:42:57 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 26 12:42:57 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1716641146' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 26 12:42:57 np0005596060 suspicious_poitras[94698]: 
Jan 26 12:42:57 np0005596060 systemd[1]: libpod-0a7954b76f0aa8ea8f24989dfffda14dda100fcc186cbb5da8d614891a860caf.scope: Deactivated successfully.
Jan 26 12:42:57 np0005596060 suspicious_poitras[94698]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":15}}
Jan 26 12:42:57 np0005596060 podman[94683]: 2026-01-26 17:42:57.489591514 +0000 UTC m=+0.940289581 container died 0a7954b76f0aa8ea8f24989dfffda14dda100fcc186cbb5da8d614891a860caf (image=quay.io/ceph/ceph:v18, name=suspicious_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: Deploying daemon haproxy.rgw.default.compute-2.dyvhne on compute-2
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:42:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:42:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:42:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:42:57.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:42:57 np0005596060 systemd[1]: var-lib-containers-storage-overlay-40a50f757d24443bce276e4b9150a9832e6019d67ad20c54c0001dbaa59b79ec-merged.mount: Deactivated successfully.
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 26 12:42:57 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev 64aeadd0-129c-4971-ac9a-db63e91998a4 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Jan 26 12:42:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:42:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 51 pg[6.0( v 48'39 (0'0,48'39] local-lis/les=21/22 n=22 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=51 pruub=10.372341156s) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 45'38 mlcod 45'38 active pruub 117.062210083s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:42:58 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 51 pg[6.0( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=51 pruub=10.372341156s) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 45'38 mlcod 0'0 unknown pruub 117.062210083s@ mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:58 np0005596060 podman[94683]: 2026-01-26 17:42:58.033430758 +0000 UTC m=+1.484128815 container remove 0a7954b76f0aa8ea8f24989dfffda14dda100fcc186cbb5da8d614891a860caf (image=quay.io/ceph/ceph:v18, name=suspicious_poitras, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:42:58 np0005596060 systemd[1]: libpod-conmon-0a7954b76f0aa8ea8f24989dfffda14dda100fcc186cbb5da8d614891a860caf.scope: Deactivated successfully.
Jan 26 12:42:58 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 26 12:42:58 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 26 12:42:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v141: 150 pgs: 46 unknown, 104 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 113 KiB/s rd, 0 B/s wr, 210 op/s
Jan 26 12:42:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:42:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:42:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:42:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 26 12:42:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:42:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:42:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 26 12:42:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:42:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:42:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 26 12:42:59 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 26 12:42:59 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev e62e8366-4465-4a8a-9e7a-06d6e5fd6ee4 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 26 12:42:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Jan 26 12:42:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.b( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.8( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.c( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.a( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.9( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.e( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.5( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=2 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.2( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=2 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.f( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.3( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=2 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.4( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=2 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.7( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.6( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=2 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.d( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=21/22 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=21/22 n=2 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.8( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.c( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.2( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.5( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.0( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 45'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.3( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.4( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.7( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.6( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.a( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 52 pg[6.d( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=21/21 les/c/f=22/22/0 sis=51) [1] r=0 lpr=51 pi=[21,51)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:42:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:42:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:42:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:42:59.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:42:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:42:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:42:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:42:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:43:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:00.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Jan 26 12:43:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v143: 181 pgs: 77 unknown, 104 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 128 KiB/s rd, 0 B/s wr, 238 op/s
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 26 12:43:00 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev 79f51859-3298-40af-b37b-b7ff1f4e0fc3 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:43:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:00 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 12:43:00 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 12:43:00 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 12:43:00 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 12:43:00 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.alfrff on compute-2
Jan 26 12:43:00 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.alfrff on compute-2
Jan 26 12:43:01 np0005596060 ceph-mgr[74563]: [progress WARNING root] Starting Global Recovery Event,77 pgs not in active + clean state
Jan 26 12:43:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:43:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:43:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:01.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 26 12:43:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:43:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:02.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 26 12:43:02 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev c1520e78-c383-42f5-a227-65f85c5fd180 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:43:02 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 54 pg[8.0( v 40'4 (0'0,40'4] local-lis/les=39/40 n=4 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=54 pruub=11.628456116s) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 40'3 mlcod 40'3 active pruub 122.833305359s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:02 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 54 pg[8.0( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=54 pruub=11.628456116s) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 40'3 mlcod 0'0 unknown pruub 122.833305359s@ mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: Deploying daemon keepalived.rgw.default.compute-2.alfrff on compute-2
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 26 12:43:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v146: 212 pgs: 31 unknown, 181 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:43:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 26 12:43:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 26 12:43:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:43:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:43:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:43:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 26 12:43:03 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] update: starting ev c0b23320-d556-49c9-8f35-7a14e8d782fc (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev f9315e36-6847-4ffb-94e4-6b333ecc408d (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event f9315e36-6847-4ffb-94e4-6b333ecc408d (PG autoscaler increasing pool 5 PGs from 1 to 32) in 8 seconds
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev 6f76e629-f2f9-45af-8f39-7bfe40048b46 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 6f76e629-f2f9-45af-8f39-7bfe40048b46 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 7 seconds
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev 64aeadd0-129c-4971-ac9a-db63e91998a4 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 64aeadd0-129c-4971-ac9a-db63e91998a4 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev e62e8366-4465-4a8a-9e7a-06d6e5fd6ee4 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event e62e8366-4465-4a8a-9e7a-06d6e5fd6ee4 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev 79f51859-3298-40af-b37b-b7ff1f4e0fc3 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 79f51859-3298-40af-b37b-b7ff1f4e0fc3 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev c1520e78-c383-42f5-a227-65f85c5fd180 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event c1520e78-c383-42f5-a227-65f85c5fd180 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev c0b23320-d556-49c9-8f35-7a14e8d782fc (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 26 12:43:03 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event c0b23320-d556-49c9-8f35-7a14e8d782fc (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1f( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.18( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.17( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.16( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.2( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.10( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.11( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.6( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.12( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.13( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[9.0( v 48'1155 (0'0,48'1155] local-lis/les=41/42 n=177 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=55 pruub=13.110163689s) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 48'1154 mlcod 48'1154 active pruub 125.093551636s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1c( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.5( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1( v 40'4 (0'0,40'4] local-lis/les=39/40 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1d( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1e( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.19( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1a( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1b( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.4( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.7( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.b( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.c( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.d( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.e( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.a( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.9( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.8( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.f( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.3( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.15( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.14( v 40'4 lc 0'0 (0'0,40'4] local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1f( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.18( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.16( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[9.0( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=55 pruub=13.110163689s) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 48'1154 mlcod 0'0 unknown pruub 125.093551636s@ mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.10( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.6( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.17( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.2( v 40'4 (0'0,40'4] local-lis/les=54/55 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.12( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.13( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1c( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1d( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1( v 40'4 (0'0,40'4] local-lis/les=54/55 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.19( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1e( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1b( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.1a( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.4( v 40'4 (0'0,40'4] local-lis/les=54/55 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.0( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 40'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.c( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.7( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.d( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.b( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.a( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.9( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.8( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.f( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.15( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.3( v 40'4 (0'0,40'4] local-lis/les=54/55 n=1 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.5( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.14( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.11( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 55 pg[8.e( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [1] r=0 lpr=54 pi=[39,54)/1 crt=40'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:03 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:03 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:03 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 26 12:43:03 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:43:03 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:43:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:03.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:04.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 26 12:43:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 26 12:43:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 26 12:43:04 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1e( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.16( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.17( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.11( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.10( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.19( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.3( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.4( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.7( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.13( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.12( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1d( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1c( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1f( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1b( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.18( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.5( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1a( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.6( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.a( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.d( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.c( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.f( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.b( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.8( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.9( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.e( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.2( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.14( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.15( v 48'1155 lc 0'0 (0'0,48'1155] local-lis/les=41/42 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1e( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.17( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.16( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.10( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.19( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.4( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.3( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.0( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 48'1154 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.13( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.7( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.11( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.12( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1d( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1c( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1b( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.18( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.5( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1f( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1a( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.6( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.1( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.a( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.d( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.c( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.f( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.b( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.8( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.e( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.2( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.9( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.14( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 56 pg[9.15( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=41/41 les/c/f=42/42/0 sis=55) [1] r=0 lpr=55 pi=[41,55)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v149: 274 pgs: 93 unknown, 181 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:43:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 26 12:43:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:05 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 2.1f deep-scrub starts
Jan 26 12:43:05 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 2.1f deep-scrub ok
Jan 26 12:43:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:43:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 26 12:43:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:43:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 26 12:43:05 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 26 12:43:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 57 pg[11.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=15.787200928s) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active pruub 130.199539185s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 57 pg[11.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=57 pruub=15.787200928s) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown pruub 130.199539185s@ mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:05.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:43:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:43:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 26 12:43:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:06 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 12:43:06 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 12:43:06 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 12:43:06 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 12:43:06 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.erukyj on compute-0
Jan 26 12:43:06 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.erukyj on compute-0
Jan 26 12:43:06 np0005596060 ceph-mgr[74563]: [progress INFO root] Writing back 20 completed events
Jan 26 12:43:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 26 12:43:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:06.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 26 12:43:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 26 12:43:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v151: 305 pgs: 93 unknown, 212 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:43:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 26 12:43:06 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.17( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.16( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.c( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.b( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.a( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.9( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.d( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.e( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.f( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.8( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.3( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.7( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.18( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.19( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.4( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1a( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1d( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1e( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1f( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.10( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.11( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.2( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.5( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.6( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.13( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.12( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.14( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.15( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1b( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1c( empty local-lis/les=46/47 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.0( empty local-lis/les=57/58 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.16( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.b( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.17( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.c( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.9( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.a( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.d( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.e( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.8( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.f( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.18( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.3( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.7( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.19( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.4( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1a( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1d( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1e( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.10( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.11( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.2( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.6( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.5( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1f( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.14( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.15( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1c( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.13( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.12( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 58 pg[11.1b( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=46/46 les/c/f=47/47/0 sis=57) [1] r=0 lpr=57 pi=[46,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:07 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 26 12:43:07 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 26 12:43:07 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 26 12:43:07 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:07 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:07 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:07 np0005596060 ceph-mon[74267]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 26 12:43:07 np0005596060 ceph-mon[74267]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 26 12:43:07 np0005596060 ceph-mon[74267]: Deploying daemon keepalived.rgw.default.compute-0.erukyj on compute-0
Jan 26 12:43:07 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:07.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:08.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v153: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 26 12:43:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.16( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.471408844s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.639358521s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.493917465s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.661880493s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.17( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.493843079s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.661880493s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.16( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.471323967s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.639358521s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.15( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.841978073s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.010025024s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.15( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.841927528s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.010025024s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.14( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.841890335s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.010070801s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.d( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.237692833s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 132.405944824s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.3( v 40'4 (0'0,40'4] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.841764450s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.010025024s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.14( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.841815948s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.010070801s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.d( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.237669945s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405944824s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.3( v 40'4 (0'0,40'4] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.841741562s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.010025024s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.237460136s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 132.405914307s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.1( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.237436295s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405914307s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.f( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.841430664s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009948730s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.f( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.841403008s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009948730s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.8( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.841349602s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009948730s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.7( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.237005234s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 132.405639648s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.8( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.841331482s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009948730s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.9( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.841215134s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009902954s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.7( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.236978531s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405639648s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.9( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.841195107s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009902954s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.a( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.493132591s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662002563s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.a( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.840937614s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009811401s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.a( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.493110657s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662002563s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.3( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.236568451s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 132.405609131s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492997169s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662048340s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.d( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.840692520s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009750366s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.d( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.840672493s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009750366s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.c( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.840587616s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009719849s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.e( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492917061s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662048340s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.3( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.236474037s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405609131s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.c( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.840517998s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009719849s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492825508s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662078857s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.5( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.236121178s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 132.405395508s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.a( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.840545654s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009811401s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.f( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492774010s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662078857s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.5( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.236019135s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405395508s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492668152s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662063599s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.b( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.840317726s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009719849s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.b( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.840262413s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009719849s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.8( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492603302s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662063599s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492561340s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662124634s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.3( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492537498s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662124634s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.235765457s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 132.405349731s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492648125s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662292480s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.4( v 40'4 (0'0,40'4] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.840005875s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009674072s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.235692978s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405349731s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492174149s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662155151s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.7( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492141724s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662155151s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.4( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492623329s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662292480s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.4( v 40'4 (0'0,40'4] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.839975357s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009674072s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.1b( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.839482307s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009643555s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492112160s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662322998s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.1b( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.839454651s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009643555s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.19( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.839374542s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009613037s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.1a( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492089272s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662322998s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492075920s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662322998s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.1d( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.492036819s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662322998s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.491859436s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662261963s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.19( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.839318275s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009613037s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.19( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.491821289s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662261963s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.12( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.838778496s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009460449s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.491657257s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662353516s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.12( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.838751793s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009460449s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.234810829s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 132.405578613s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.f( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.234776497s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405578613s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.1c( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.838695526s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009521484s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.1e( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.491627693s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662353516s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.6( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.838317871s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009185791s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.1c( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.838660240s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009521484s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.6( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.838291168s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009185791s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.491485596s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662506104s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.181219101s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 132.352218628s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.5( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.491456032s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662506104s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.5( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.838946342s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.010040283s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[6.b( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=14.181138039s) [2] r=-1 lpr=59 pi=[51,59)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.352218628s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.2( v 40'4 (0'0,40'4] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.838313103s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009429932s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.2( v 40'4 (0'0,40'4] local-lis/les=54/55 n=1 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.838290215s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009429932s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.5( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.838841438s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.010040283s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.491245270s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662460327s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.1( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.491216660s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662460327s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.13( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.491333008s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662612915s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.10( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.837883949s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009170532s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.10( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.837861061s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009170532s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.13( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.491301537s) [2] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662612915s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.491176605s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662567139s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.12( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.491154671s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662567139s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.16( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.821009636s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 127.992538452s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.16( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.820987701s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.992538452s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.17( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.837568283s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009185791s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.11( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.837804794s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 128.009384155s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.490890503s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662551880s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.11( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.837735176s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009384155s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.17( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.837534904s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 128.009185791s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.14( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.490859985s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662551880s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.490924835s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662643433s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.1b( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.490903854s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662643433s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.490625381s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 active pruub 131.662597656s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[11.1c( empty local-lis/les=57/58 n=0 ec=57/46 lis/c=57/57 les/c/f=58/58/0 sis=59 pruub=13.490597725s) [0] r=-1 lpr=59 pi=[57,59)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 131.662597656s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.18( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.820702553s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 127.992507935s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.1f( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.820478439s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 active pruub 127.992485046s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.18( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.820456505s) [0] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.992507935s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[8.1f( v 40'4 (0'0,40'4] local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=59 pruub=9.820340157s) [2] r=-1 lpr=59 pi=[54,59)/1 crt=40'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 127.992485046s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[5.19( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[5.1d( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[5.1e( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[5.17( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[5.6( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[5.a( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[5.14( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[5.3( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[5.c( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[5.5( empty local-lis/les=0/0 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[10.13( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.1e( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[10.18( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.4( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.3( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.10( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.13( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[10.19( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.b( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.8( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[10.5( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.9( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.6( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[10.8( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.2( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[10.2( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.f( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.e( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[10.1b( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.1b( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[7.18( empty local-lis/les=0/0 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[10.15( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 59 pg[10.14( empty local-lis/les=0/0 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:09.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:10.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:10 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 26 12:43:10 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 26 12:43:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:43:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 26 12:43:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 26 12:43:11 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 5bb77fda-1b03-439a-af9d-9692c8362592 (Global Recovery Event) in 10 seconds
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:43:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[10.14( v 58'51 lc 45'43 (0'0,58'51] local-lis/les=59/60 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=58'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.10( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.18( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.1e( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[10.15( v 58'51 lc 45'19 (0'0,58'51] local-lis/les=59/60 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=58'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.e( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.9( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[10.1b( v 45'48 (0'0,45'48] local-lis/les=59/60 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.b( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[10.18( v 45'48 (0'0,45'48] local-lis/les=59/60 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.13( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[10.2( v 45'48 (0'0,45'48] local-lis/les=59/60 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[10.19( v 45'48 (0'0,45'48] local-lis/les=59/60 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.f( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.8( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.3( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.4( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[10.8( v 45'48 (0'0,45'48] local-lis/les=59/60 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.2( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.6( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[10.5( v 45'48 (0'0,45'48] local-lis/les=59/60 n=1 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[10.13( v 45'48 (0'0,45'48] local-lis/les=59/60 n=0 ec=55/43 lis/c=55/55 les/c/f=56/56/0 sis=59) [1] r=0 lpr=59 pi=[55,59)/1 crt=45'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[5.19( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[7.1b( empty local-lis/les=59/60 n=0 ec=52/23 lis/c=52/52 les/c/f=53/53/0 sis=59) [1] r=0 lpr=59 pi=[52,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[5.5( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[5.3( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[5.6( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[5.a( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[5.14( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[5.1e( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[5.c( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[5.1d( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 60 pg[5.17( empty local-lis/les=59/60 n=0 ec=51/19 lis/c=51/51 les/c/f=52/52/0 sis=59) [1] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:11 np0005596060 podman[94878]: 2026-01-26 17:43:11.671077476 +0000 UTC m=+5.021908776 container create 9e394268a6aaaef89180816d1a0d1b16b1f030591e751e4df054b1d8c09340db (image=quay.io/ceph/keepalived:2.2.4, name=gracious_tu, description=keepalived for Ceph, io.buildah.version=1.28.2, name=keepalived, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, release=1793, version=2.2.4, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 12:43:11 np0005596060 podman[94878]: 2026-01-26 17:43:11.648809273 +0000 UTC m=+4.999640613 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 26 12:43:11 np0005596060 systemd[1]: Started libpod-conmon-9e394268a6aaaef89180816d1a0d1b16b1f030591e751e4df054b1d8c09340db.scope.
Jan 26 12:43:11 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:11 np0005596060 podman[94878]: 2026-01-26 17:43:11.770047592 +0000 UTC m=+5.120878882 container init 9e394268a6aaaef89180816d1a0d1b16b1f030591e751e4df054b1d8c09340db (image=quay.io/ceph/keepalived:2.2.4, name=gracious_tu, vcs-type=git, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, release=1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, name=keepalived)
Jan 26 12:43:11 np0005596060 podman[94878]: 2026-01-26 17:43:11.77723373 +0000 UTC m=+5.128065010 container start 9e394268a6aaaef89180816d1a0d1b16b1f030591e751e4df054b1d8c09340db (image=quay.io/ceph/keepalived:2.2.4, name=gracious_tu, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, name=keepalived, build-date=2023-02-22T09:23:20, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 12:43:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:11 np0005596060 gracious_tu[94980]: 0 0
Jan 26 12:43:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:11.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:11 np0005596060 systemd[1]: libpod-9e394268a6aaaef89180816d1a0d1b16b1f030591e751e4df054b1d8c09340db.scope: Deactivated successfully.
Jan 26 12:43:11 np0005596060 podman[94878]: 2026-01-26 17:43:11.795539874 +0000 UTC m=+5.146371164 container attach 9e394268a6aaaef89180816d1a0d1b16b1f030591e751e4df054b1d8c09340db (image=quay.io/ceph/keepalived:2.2.4, name=gracious_tu, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, release=1793, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, vendor=Red Hat, Inc., name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 26 12:43:11 np0005596060 podman[94878]: 2026-01-26 17:43:11.796099898 +0000 UTC m=+5.146931188 container died 9e394268a6aaaef89180816d1a0d1b16b1f030591e751e4df054b1d8c09340db (image=quay.io/ceph/keepalived:2.2.4, name=gracious_tu, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vcs-type=git, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1793, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, description=keepalived for Ceph, io.buildah.version=1.28.2)
Jan 26 12:43:11 np0005596060 systemd[1]: var-lib-containers-storage-overlay-56702354858ffd93879ba8eb6ee59dc6719b08c9013acc3525c58d41106bacc4-merged.mount: Deactivated successfully.
Jan 26 12:43:11 np0005596060 podman[94878]: 2026-01-26 17:43:11.880992904 +0000 UTC m=+5.231824184 container remove 9e394268a6aaaef89180816d1a0d1b16b1f030591e751e4df054b1d8c09340db (image=quay.io/ceph/keepalived:2.2.4, name=gracious_tu, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, name=keepalived, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, description=keepalived for Ceph, io.openshift.expose-services=, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, distribution-scope=public, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 26 12:43:11 np0005596060 systemd[1]: libpod-conmon-9e394268a6aaaef89180816d1a0d1b16b1f030591e751e4df054b1d8c09340db.scope: Deactivated successfully.
Jan 26 12:43:11 np0005596060 systemd[1]: Reloading.
Jan 26 12:43:12 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:43:12 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:43:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:12.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:12 np0005596060 systemd[1]: Reloading.
Jan 26 12:43:12 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:43:12 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 26 12:43:12 np0005596060 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.erukyj for d4cd1917-5876-51b6-bc64-65a16199754d...
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 26 12:43:12 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 61 pg[6.6( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=11.125485420s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 132.405654907s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:12 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 61 pg[6.6( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=11.125433922s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405654907s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:12 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 61 pg[6.2( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=11.124814034s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 132.405364990s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:12 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 61 pg[6.2( v 48'39 (0'0,48'39] local-lis/les=51/52 n=2 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=11.124756813s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405364990s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:12 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 61 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=11.124586105s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 132.405303955s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:12 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 61 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=11.124561310s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405303955s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:12 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 61 pg[6.a( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=11.125036240s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 132.405914307s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:12 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 61 pg[6.a( v 48'39 (0'0,48'39] local-lis/les=51/52 n=1 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=61 pruub=11.125008583s) [0] r=-1 lpr=61 pi=[51,61)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 132.405914307s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 26 12:43:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 1 active+recovery_wait, 4 active+recovery_wait+degraded, 34 peering, 1 active+recovering, 265 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 8/213 objects degraded (3.756%); 2/213 objects misplaced (0.939%); 5 B/s, 1 keys/s, 1 objects/s recovering
Jan 26 12:43:12 np0005596060 podman[95129]: 2026-01-26 17:43:12.793290834 +0000 UTC m=+0.043667294 container create 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, description=keepalived for Ceph, version=2.2.4, build-date=2023-02-22T09:23:20, name=keepalived, com.redhat.component=keepalived-container, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 26 12:43:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea40ce5d3aa974a7c94e5f4576f39715afa64f8a9632dde362c60bc9ae7c2a5b/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:12 np0005596060 podman[95129]: 2026-01-26 17:43:12.852991994 +0000 UTC m=+0.103368474 container init 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived)
Jan 26 12:43:12 np0005596060 podman[95129]: 2026-01-26 17:43:12.858239734 +0000 UTC m=+0.108616204 container start 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, version=2.2.4, vcs-type=git)
Jan 26 12:43:12 np0005596060 bash[95129]: 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5
Jan 26 12:43:12 np0005596060 podman[95129]: 2026-01-26 17:43:12.770052597 +0000 UTC m=+0.020429117 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 26 12:43:12 np0005596060 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.erukyj for d4cd1917-5876-51b6-bc64-65a16199754d.
Jan 26 12:43:12 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj[95144]: Mon Jan 26 17:43:12 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 26 12:43:12 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj[95144]: Mon Jan 26 17:43:12 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 26 12:43:12 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj[95144]: Mon Jan 26 17:43:12 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 26 12:43:12 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj[95144]: Mon Jan 26 17:43:12 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 26 12:43:12 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj[95144]: Mon Jan 26 17:43:12 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 26 12:43:12 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj[95144]: Mon Jan 26 17:43:12 2026: Starting VRRP child process, pid=4
Jan 26 12:43:12 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj[95144]: Mon Jan 26 17:43:12 2026: Startup complete
Jan 26 12:43:12 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj[95144]: Mon Jan 26 17:43:12 2026: (VI_0) Entering BACKUP STATE (init)
Jan 26 12:43:12 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj[95144]: Mon Jan 26 17:43:12 2026: VRRP_Script(check_backend) succeeded
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:12 np0005596060 ceph-mgr[74563]: [progress INFO root] complete: finished ev 7a082bae-676a-4392-8f97-159eb655d715 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 26 12:43:12 np0005596060 ceph-mgr[74563]: [progress INFO root] Completed event 7a082bae-676a-4392-8f97-159eb655d715 (Updating ingress.rgw.default deployment (+4 -> 4)) in 21 seconds
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 26 12:43:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 26 12:43:13 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 8/213 objects degraded (3.756%), 4 pgs degraded (PG_DEGRADED)
Jan 26 12:43:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 26 12:43:13 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 26 12:43:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 26 12:43:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 26 12:43:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:43:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:43:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:43:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:43:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:43:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:43:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:14.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:14 np0005596060 podman[95422]: 2026-01-26 17:43:14.259836628 +0000 UTC m=+0.057964879 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:43:14 np0005596060 podman[95422]: 2026-01-26 17:43:14.352428375 +0000 UTC m=+0.150556626 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 26 12:43:14 np0005596060 ceph-mon[74267]: Health check failed: Degraded data redundancy: 8/213 objects degraded (3.756%), 4 pgs degraded (PG_DEGRADED)
Jan 26 12:43:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v160: 305 pgs: 1 active+recovery_wait, 4 active+recovery_wait+degraded, 34 peering, 1 active+recovering, 265 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 8/213 objects degraded (3.756%); 2/213 objects misplaced (0.939%); 6 B/s, 1 keys/s, 1 objects/s recovering
Jan 26 12:43:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:43:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:43:15 np0005596060 podman[95581]: 2026-01-26 17:43:15.088351005 +0000 UTC m=+0.073686779 container exec e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:43:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:15 np0005596060 podman[95581]: 2026-01-26 17:43:15.103821369 +0000 UTC m=+0.089157083 container exec_died e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:43:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:43:15 np0005596060 podman[95646]: 2026-01-26 17:43:15.42585684 +0000 UTC m=+0.080716854 container exec 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, io.openshift.tags=Ceph keepalived, architecture=x86_64, com.redhat.component=keepalived-container, vcs-type=git, description=keepalived for Ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, name=keepalived, vendor=Red Hat, Inc.)
Jan 26 12:43:15 np0005596060 podman[95646]: 2026-01-26 17:43:15.452817249 +0000 UTC m=+0.107677293 container exec_died 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, architecture=x86_64, vcs-type=git, description=keepalived for Ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2)
Jan 26 12:43:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:43:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:15.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:43:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:16 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7876be1a-1fdd-4230-92c6-23275120d47b does not exist
Jan 26 12:43:16 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9c7b07aa-7822-48ef-9404-fc819c6b983a does not exist
Jan 26 12:43:16 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev a029bf7c-39b5-48d9-9256-71014b0ccd38 does not exist
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:43:16 np0005596060 ceph-mgr[74563]: [progress INFO root] Writing back 22 completed events
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 26 12:43:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:16.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:16 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj[95144]: Mon Jan 26 17:43:16 2026: (VI_0) Entering MASTER STATE
Jan 26 12:43:16 np0005596060 podman[95820]: 2026-01-26 17:43:16.656107224 +0000 UTC m=+0.067369072 container create fbaf05edadb4d99b5c1f9229f86920fc886126d2a5adae82b4f53778aa592d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclaren, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 12:43:16 np0005596060 systemd[1]: Started libpod-conmon-fbaf05edadb4d99b5c1f9229f86920fc886126d2a5adae82b4f53778aa592d48.scope.
Jan 26 12:43:16 np0005596060 podman[95820]: 2026-01-26 17:43:16.631047423 +0000 UTC m=+0.042309251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:16 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:16 np0005596060 podman[95820]: 2026-01-26 17:43:16.756888905 +0000 UTC m=+0.168150803 container init fbaf05edadb4d99b5c1f9229f86920fc886126d2a5adae82b4f53778aa592d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclaren, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:43:16 np0005596060 podman[95820]: 2026-01-26 17:43:16.767815796 +0000 UTC m=+0.179077644 container start fbaf05edadb4d99b5c1f9229f86920fc886126d2a5adae82b4f53778aa592d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclaren, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 12:43:16 np0005596060 podman[95820]: 2026-01-26 17:43:16.771702562 +0000 UTC m=+0.182964410 container attach fbaf05edadb4d99b5c1f9229f86920fc886126d2a5adae82b4f53778aa592d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclaren, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 12:43:16 np0005596060 crazy_mclaren[95836]: 167 167
Jan 26 12:43:16 np0005596060 systemd[1]: libpod-fbaf05edadb4d99b5c1f9229f86920fc886126d2a5adae82b4f53778aa592d48.scope: Deactivated successfully.
Jan 26 12:43:16 np0005596060 podman[95820]: 2026-01-26 17:43:16.773707112 +0000 UTC m=+0.184968920 container died fbaf05edadb4d99b5c1f9229f86920fc886126d2a5adae82b4f53778aa592d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclaren, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:43:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 1 active+recovery_wait, 4 active+recovery_wait+degraded, 1 active+recovering, 299 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 8/213 objects degraded (3.756%); 2/213 objects misplaced (0.939%); 221 B/s, 1 keys/s, 1 objects/s recovering
Jan 26 12:43:16 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a9f658d6cc67527f0f9c21f2f4c43e085deed4dbfc33d43cd155f9db3f0f5c06-merged.mount: Deactivated successfully.
Jan 26 12:43:16 np0005596060 podman[95820]: 2026-01-26 17:43:16.825011595 +0000 UTC m=+0.236273413 container remove fbaf05edadb4d99b5c1f9229f86920fc886126d2a5adae82b4f53778aa592d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_mclaren, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:43:16 np0005596060 systemd[1]: libpod-conmon-fbaf05edadb4d99b5c1f9229f86920fc886126d2a5adae82b4f53778aa592d48.scope: Deactivated successfully.
Jan 26 12:43:17 np0005596060 podman[95862]: 2026-01-26 17:43:17.007942844 +0000 UTC m=+0.046531805 container create 734f1f731515a43fcd9bbfbff55e28d2ceb4d4e0e9073887b20c357e81f924c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:43:17 np0005596060 systemd[1]: Started libpod-conmon-734f1f731515a43fcd9bbfbff55e28d2ceb4d4e0e9073887b20c357e81f924c9.scope.
Jan 26 12:43:17 np0005596060 podman[95862]: 2026-01-26 17:43:16.987793874 +0000 UTC m=+0.026382865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:17 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2287d60459d066e31319db0a3a84217432f37b91b6dc1188f9c90209a9deb00b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2287d60459d066e31319db0a3a84217432f37b91b6dc1188f9c90209a9deb00b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2287d60459d066e31319db0a3a84217432f37b91b6dc1188f9c90209a9deb00b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2287d60459d066e31319db0a3a84217432f37b91b6dc1188f9c90209a9deb00b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2287d60459d066e31319db0a3a84217432f37b91b6dc1188f9c90209a9deb00b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:17 np0005596060 podman[95862]: 2026-01-26 17:43:17.122901247 +0000 UTC m=+0.161490258 container init 734f1f731515a43fcd9bbfbff55e28d2ceb4d4e0e9073887b20c357e81f924c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:43:17 np0005596060 podman[95862]: 2026-01-26 17:43:17.134276189 +0000 UTC m=+0.172865150 container start 734f1f731515a43fcd9bbfbff55e28d2ceb4d4e0e9073887b20c357e81f924c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_merkle, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:43:17 np0005596060 podman[95862]: 2026-01-26 17:43:17.139035377 +0000 UTC m=+0.177624378 container attach 734f1f731515a43fcd9bbfbff55e28d2ceb4d4e0e9073887b20c357e81f924c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 26 12:43:17 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:17.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
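The radosgw `beast` frontend emits one access line per request: client IP, user (`anonymous` here), timestamp, request line, HTTP status, bytes sent, and the request latency. A minimal sketch for extracting those fields from such a line (the regex and helper name are the editor's, not part of radosgw):

```python
import re

# Matches the beast access-log shape seen above:
# beast: 0x...: <ip> - <user> [<ts>] "<request>" <status> <bytes> - - - latency=<sec>s
BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s'
)

def parse_beast_line(line: str):
    """Return a dict of fields from a beast access-log line, or None if it doesn't match."""
    m = BEAST_RE.search(line)
    if not m:
        return None
    d = m.groupdict()
    d["status"] = int(d["status"])
    d["bytes"] = int(d["bytes"])
    d["latency"] = float(d["latency"])
    return d

line = ('beast: 0x7fc3285836f0: 192.168.122.100 - anonymous '
        '[26/Jan/2026:17:43:17.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000025s')
print(parse_beast_line(line))
```

The anonymous `HEAD /` probes at roughly one-second intervals in this log are consistent with a load-balancer health check rather than real client traffic.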
Jan 26 12:43:17 np0005596060 friendly_merkle[95879]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:43:17 np0005596060 friendly_merkle[95879]: --> relative data size: 1.0
Jan 26 12:43:17 np0005596060 friendly_merkle[95879]: --> All data devices are unavailable
Jan 26 12:43:17 np0005596060 systemd[1]: libpod-734f1f731515a43fcd9bbfbff55e28d2ceb4d4e0e9073887b20c357e81f924c9.scope: Deactivated successfully.
Jan 26 12:43:17 np0005596060 podman[95862]: 2026-01-26 17:43:17.983048949 +0000 UTC m=+1.021637910 container died 734f1f731515a43fcd9bbfbff55e28d2ceb4d4e0e9073887b20c357e81f924c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 12:43:18 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2287d60459d066e31319db0a3a84217432f37b91b6dc1188f9c90209a9deb00b-merged.mount: Deactivated successfully.
Jan 26 12:43:18 np0005596060 podman[95862]: 2026-01-26 17:43:18.056584303 +0000 UTC m=+1.095173264 container remove 734f1f731515a43fcd9bbfbff55e28d2ceb4d4e0e9073887b20c357e81f924c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_merkle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 12:43:18 np0005596060 systemd[1]: libpod-conmon-734f1f731515a43fcd9bbfbff55e28d2ceb4d4e0e9073887b20c357e81f924c9.scope: Deactivated successfully.
Jan 26 12:43:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:18.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 298 B/s, 1 keys/s, 2 objects/s recovering
Jan 26 12:43:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 26 12:43:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 26 12:43:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 26 12:43:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 26 12:43:18 np0005596060 podman[96045]: 2026-01-26 17:43:18.814841217 +0000 UTC m=+0.043498440 container create fb7ce843b0e347e917e09ab83d627f5d5bc8a6849fe67f3899588313b4e0da77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 12:43:18 np0005596060 systemd[1]: Started libpod-conmon-fb7ce843b0e347e917e09ab83d627f5d5bc8a6849fe67f3899588313b4e0da77.scope.
Jan 26 12:43:18 np0005596060 podman[96045]: 2026-01-26 17:43:18.795066067 +0000 UTC m=+0.023723300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:18 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:18 np0005596060 podman[96045]: 2026-01-26 17:43:18.908927782 +0000 UTC m=+0.137584995 container init fb7ce843b0e347e917e09ab83d627f5d5bc8a6849fe67f3899588313b4e0da77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:43:18 np0005596060 podman[96045]: 2026-01-26 17:43:18.916414828 +0000 UTC m=+0.145072051 container start fb7ce843b0e347e917e09ab83d627f5d5bc8a6849fe67f3899588313b4e0da77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:43:18 np0005596060 podman[96045]: 2026-01-26 17:43:18.920255463 +0000 UTC m=+0.148912676 container attach fb7ce843b0e347e917e09ab83d627f5d5bc8a6849fe67f3899588313b4e0da77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 26 12:43:18 np0005596060 pensive_austin[96060]: 167 167
Jan 26 12:43:18 np0005596060 systemd[1]: libpod-fb7ce843b0e347e917e09ab83d627f5d5bc8a6849fe67f3899588313b4e0da77.scope: Deactivated successfully.
Jan 26 12:43:18 np0005596060 conmon[96060]: conmon fb7ce843b0e347e917e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb7ce843b0e347e917e09ab83d627f5d5bc8a6849fe67f3899588313b4e0da77.scope/container/memory.events
Jan 26 12:43:18 np0005596060 podman[96045]: 2026-01-26 17:43:18.92256057 +0000 UTC m=+0.151217783 container died fb7ce843b0e347e917e09ab83d627f5d5bc8a6849fe67f3899588313b4e0da77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_austin, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 12:43:18 np0005596060 systemd[1]: var-lib-containers-storage-overlay-39c5c38676de93bd496da7dd2e950fbf26f7007da295b34c53fca9719606cf6c-merged.mount: Deactivated successfully.
Jan 26 12:43:18 np0005596060 podman[96045]: 2026-01-26 17:43:18.957980519 +0000 UTC m=+0.186637732 container remove fb7ce843b0e347e917e09ab83d627f5d5bc8a6849fe67f3899588313b4e0da77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_austin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 12:43:18 np0005596060 systemd[1]: libpod-conmon-fb7ce843b0e347e917e09ab83d627f5d5bc8a6849fe67f3899588313b4e0da77.scope: Deactivated successfully.
Jan 26 12:43:19 np0005596060 podman[96084]: 2026-01-26 17:43:19.158761081 +0000 UTC m=+0.060418580 container create 479a3ff32cd254919b04a9184dc19aa98ade6b3e7be4029a0eaeec9e6903c568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_edison, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:43:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 26 12:43:19 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 8/213 objects degraded (3.756%), 4 pgs degraded)
Jan 26 12:43:19 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 26 12:43:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 26 12:43:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 26 12:43:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 26 12:43:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
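Each mon_command shows up twice on the audit channel: once as `dispatch` when the monitor receives it and once as `finished` when it commits, with the quoting differing slightly between the two forms (`cmd=[...]` vs `cmd='[...]'`). A sketch (hypothetical helper, editor's own) for pairing the two states and flagging commands that never finished:

```python
import re

# The dispatch form is cmd=[...]: dispatch, the finished form cmd='[...]': finished;
# the optional quotes in the pattern absorb that difference.
AUDIT_RE = re.compile(r"cmd='?(?P<cmd>\[.*?\])'?: (?P<state>dispatch|finished)")

def outstanding_commands(lines):
    """Return the set of command payloads that were dispatched but never finished."""
    pending = set()
    for line in lines:
        m = AUDIT_RE.search(line)
        if not m:
            continue
        if m.group("state") == "dispatch":
            pending.add(m.group("cmd"))
        else:
            pending.discard(m.group("cmd"))
    return pending
```

In the log above, both `pgp_num_actual` commands dispatched at 12:43:18 reach `finished` by 12:43:19, so neither is outstanding.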
Jan 26 12:43:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 26 12:43:19 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 26 12:43:19 np0005596060 systemd[1]: Started libpod-conmon-479a3ff32cd254919b04a9184dc19aa98ade6b3e7be4029a0eaeec9e6903c568.scope.
Jan 26 12:43:19 np0005596060 podman[96084]: 2026-01-26 17:43:19.137303228 +0000 UTC m=+0.038960737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:19 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47dd8a7c6ed60e2aeb5459b2c233b887cd39eb535c8243d2ef70a19aa64aee6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47dd8a7c6ed60e2aeb5459b2c233b887cd39eb535c8243d2ef70a19aa64aee6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47dd8a7c6ed60e2aeb5459b2c233b887cd39eb535c8243d2ef70a19aa64aee6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47dd8a7c6ed60e2aeb5459b2c233b887cd39eb535c8243d2ef70a19aa64aee6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:19 np0005596060 podman[96084]: 2026-01-26 17:43:19.275943628 +0000 UTC m=+0.177601147 container init 479a3ff32cd254919b04a9184dc19aa98ade6b3e7be4029a0eaeec9e6903c568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:43:19 np0005596060 podman[96084]: 2026-01-26 17:43:19.289118115 +0000 UTC m=+0.190775604 container start 479a3ff32cd254919b04a9184dc19aa98ade6b3e7be4029a0eaeec9e6903c568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_edison, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:43:19 np0005596060 podman[96084]: 2026-01-26 17:43:19.292914419 +0000 UTC m=+0.194571908 container attach 479a3ff32cd254919b04a9184dc19aa98ade6b3e7be4029a0eaeec9e6903c568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_edison, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 12:43:19 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Jan 26 12:43:19 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Jan 26 12:43:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:19.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]: {
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:    "1": [
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:        {
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            "devices": [
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "/dev/loop3"
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            ],
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            "lv_name": "ceph_lv0",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            "lv_size": "7511998464",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            "name": "ceph_lv0",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            "tags": {
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "ceph.cluster_name": "ceph",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "ceph.crush_device_class": "",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "ceph.encrypted": "0",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "ceph.osd_id": "1",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "ceph.type": "block",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:                "ceph.vdo": "0"
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            },
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            "type": "block",
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:            "vg_name": "ceph_vg0"
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:        }
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]:    ]
Jan 26 12:43:20 np0005596060 vigorous_edison[96101]: }
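The JSON block printed by the `vigorous_edison` container is a `ceph-volume lvm list`-style listing: a mapping from OSD id to the logical volumes backing it, with the LVM tags carrying the cluster fsid, OSD fsid, and device role. A sketch of reducing such a payload to "OSD id → block device and physical devices" (the `summarize_osds` helper and the trimmed sample are the editor's; field names match the log above):

```python
import json

# Trimmed version of the listing in the log, keeping only the fields used below.
raw = """
{
  "1": [
    {
      "devices": ["/dev/loop3"],
      "lv_path": "/dev/ceph_vg0/ceph_lv0",
      "lv_size": "7511998464",
      "tags": {"ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
               "ceph.osd_id": "1", "ceph.type": "block"},
      "type": "block"
    }
  ]
}
"""

def summarize_osds(payload: str):
    """Map each OSD id to its block LV path and the physical devices behind it."""
    out = {}
    for osd_id, lvs in json.loads(payload).items():
        for lv in lvs:
            if lv.get("type") == "block":  # skip db/wal volumes if present
                out[osd_id] = (lv["lv_path"], lv["devices"])
    return out

print(summarize_osds(raw))  # → {'1': ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3'])}
```

Here osd.1 sits on a single LV (`ceph_vg0/ceph_lv0`) backed by a loop device, which matches the earlier `friendly_merkle` output of "0 physical, 1 LVM" data devices.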
Jan 26 12:43:20 np0005596060 systemd[1]: libpod-479a3ff32cd254919b04a9184dc19aa98ade6b3e7be4029a0eaeec9e6903c568.scope: Deactivated successfully.
Jan 26 12:43:20 np0005596060 podman[96084]: 2026-01-26 17:43:20.088830618 +0000 UTC m=+0.990488107 container died 479a3ff32cd254919b04a9184dc19aa98ade6b3e7be4029a0eaeec9e6903c568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 12:43:20 np0005596060 systemd[1]: var-lib-containers-storage-overlay-47dd8a7c6ed60e2aeb5459b2c233b887cd39eb535c8243d2ef70a19aa64aee6d-merged.mount: Deactivated successfully.
Jan 26 12:43:20 np0005596060 podman[96084]: 2026-01-26 17:43:20.14455987 +0000 UTC m=+1.046217359 container remove 479a3ff32cd254919b04a9184dc19aa98ade6b3e7be4029a0eaeec9e6903c568 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_edison, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 12:43:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:20.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:20 np0005596060 systemd[1]: libpod-conmon-479a3ff32cd254919b04a9184dc19aa98ade6b3e7be4029a0eaeec9e6903c568.scope: Deactivated successfully.
Jan 26 12:43:20 np0005596060 ceph-mon[74267]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 8/213 objects degraded (3.756%), 4 pgs degraded)
Jan 26 12:43:20 np0005596060 ceph-mon[74267]: Cluster is now healthy
Jan 26 12:43:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 26 12:43:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 26 12:43:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 26 12:43:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 26 12:43:20 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 63 pg[9.b( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.225776672s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 137.129409790s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 63 pg[9.f( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.225749969s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 137.129394531s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 63 pg[9.1b( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.224844933s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 137.129226685s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 64 pg[9.b( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.225119591s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.129409790s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 64 pg[9.1b( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.224623680s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.129226685s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 63 pg[9.1f( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.224158287s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 137.129150391s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 64 pg[9.f( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.225064278s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.129394531s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 63 pg[9.7( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.223577499s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 137.128875732s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 63 pg[9.3( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.223327637s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 137.128692627s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 63 pg[9.13( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.223586082s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 137.128845215s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 64 pg[9.3( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.223235130s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.128692627s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 63 pg[9.17( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.189380646s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 137.095001221s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 64 pg[9.7( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.223393440s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.128875732s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 64 pg[9.17( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.189253807s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.095001221s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 64 pg[9.1f( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.223840714s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.129150391s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 64 pg[9.13( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=63 pruub=8.222492218s) [2] r=-1 lpr=63 pi=[55,63)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 137.128845215s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v165: 305 pgs: 305 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 304 B/s, 1 objects/s recovering
Jan 26 12:43:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 26 12:43:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 26 12:43:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 26 12:43:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 26 12:43:20 np0005596060 podman[96263]: 2026-01-26 17:43:20.863286473 +0000 UTC m=+0.066755297 container create 057d3a149016d3953e341316b52f93fa5cb121aedd42fca57244b4429313f614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hoover, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 12:43:20 np0005596060 systemd[1]: Started libpod-conmon-057d3a149016d3953e341316b52f93fa5cb121aedd42fca57244b4429313f614.scope.
Jan 26 12:43:20 np0005596060 podman[96263]: 2026-01-26 17:43:20.83616692 +0000 UTC m=+0.039635804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:20 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:20 np0005596060 podman[96263]: 2026-01-26 17:43:20.963435218 +0000 UTC m=+0.166904122 container init 057d3a149016d3953e341316b52f93fa5cb121aedd42fca57244b4429313f614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Jan 26 12:43:20 np0005596060 podman[96263]: 2026-01-26 17:43:20.971641781 +0000 UTC m=+0.175110555 container start 057d3a149016d3953e341316b52f93fa5cb121aedd42fca57244b4429313f614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hoover, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 12:43:20 np0005596060 podman[96263]: 2026-01-26 17:43:20.974983604 +0000 UTC m=+0.178452408 container attach 057d3a149016d3953e341316b52f93fa5cb121aedd42fca57244b4429313f614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hoover, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 12:43:20 np0005596060 tender_hoover[96280]: 167 167
Jan 26 12:43:20 np0005596060 systemd[1]: libpod-057d3a149016d3953e341316b52f93fa5cb121aedd42fca57244b4429313f614.scope: Deactivated successfully.
Jan 26 12:43:20 np0005596060 podman[96263]: 2026-01-26 17:43:20.976314387 +0000 UTC m=+0.179783181 container died 057d3a149016d3953e341316b52f93fa5cb121aedd42fca57244b4429313f614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 26 12:43:20 np0005596060 systemd[1]: var-lib-containers-storage-overlay-6f7f724bd80ab5341585ae90bbe297c34e5b36ead74f3d8c52916c6fa22c1ae8-merged.mount: Deactivated successfully.
Jan 26 12:43:21 np0005596060 podman[96263]: 2026-01-26 17:43:21.010053035 +0000 UTC m=+0.213521819 container remove 057d3a149016d3953e341316b52f93fa5cb121aedd42fca57244b4429313f614 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 12:43:21 np0005596060 systemd[1]: libpod-conmon-057d3a149016d3953e341316b52f93fa5cb121aedd42fca57244b4429313f614.scope: Deactivated successfully.
Jan 26 12:43:21 np0005596060 podman[96304]: 2026-01-26 17:43:21.187666832 +0000 UTC m=+0.058692128 container create 897680f6a1bb4413f9131f5aeb79564eac8d7abe3d98ed87be830c46d8e7ca54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:43:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 26 12:43:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 26 12:43:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 26 12:43:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 26 12:43:21 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 26 12:43:21 np0005596060 systemd[1]: Started libpod-conmon-897680f6a1bb4413f9131f5aeb79564eac8d7abe3d98ed87be830c46d8e7ca54.scope.
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.13( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.17( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.3( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.17( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.13( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.3( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.1f( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.1f( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.1b( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.f( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.f( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.1b( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.7( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.b( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.7( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 65 pg[9.b( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 26 12:43:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 26 12:43:21 np0005596060 podman[96304]: 2026-01-26 17:43:21.159366699 +0000 UTC m=+0.030392045 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:21 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e2f1fc2b9cdffb44ab9eae9fe682f075a89021a584f6d3fa7fc57575274e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e2f1fc2b9cdffb44ab9eae9fe682f075a89021a584f6d3fa7fc57575274e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e2f1fc2b9cdffb44ab9eae9fe682f075a89021a584f6d3fa7fc57575274e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b1e2f1fc2b9cdffb44ab9eae9fe682f075a89021a584f6d3fa7fc57575274e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:21 np0005596060 podman[96304]: 2026-01-26 17:43:21.296623845 +0000 UTC m=+0.167649231 container init 897680f6a1bb4413f9131f5aeb79564eac8d7abe3d98ed87be830c46d8e7ca54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 26 12:43:21 np0005596060 podman[96304]: 2026-01-26 17:43:21.308463979 +0000 UTC m=+0.179489265 container start 897680f6a1bb4413f9131f5aeb79564eac8d7abe3d98ed87be830c46d8e7ca54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:43:21 np0005596060 podman[96304]: 2026-01-26 17:43:21.317414541 +0000 UTC m=+0.188439897 container attach 897680f6a1bb4413f9131f5aeb79564eac8d7abe3d98ed87be830c46d8e7ca54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_galileo, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 26 12:43:21 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 26 12:43:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:21.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:22 np0005596060 nice_galileo[96321]: {
Jan 26 12:43:22 np0005596060 nice_galileo[96321]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:43:22 np0005596060 nice_galileo[96321]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:43:22 np0005596060 nice_galileo[96321]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:43:22 np0005596060 nice_galileo[96321]:        "osd_id": 1,
Jan 26 12:43:22 np0005596060 nice_galileo[96321]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:43:22 np0005596060 nice_galileo[96321]:        "type": "bluestore"
Jan 26 12:43:22 np0005596060 nice_galileo[96321]:    }
Jan 26 12:43:22 np0005596060 nice_galileo[96321]: }
Jan 26 12:43:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:22.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:22 np0005596060 systemd[1]: libpod-897680f6a1bb4413f9131f5aeb79564eac8d7abe3d98ed87be830c46d8e7ca54.scope: Deactivated successfully.
Jan 26 12:43:22 np0005596060 podman[96304]: 2026-01-26 17:43:22.16092007 +0000 UTC m=+1.031945356 container died 897680f6a1bb4413f9131f5aeb79564eac8d7abe3d98ed87be830c46d8e7ca54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 12:43:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7b1e2f1fc2b9cdffb44ab9eae9fe682f075a89021a584f6d3fa7fc57575274e2-merged.mount: Deactivated successfully.
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 26 12:43:22 np0005596060 podman[96304]: 2026-01-26 17:43:22.225419771 +0000 UTC m=+1.096445057 container remove 897680f6a1bb4413f9131f5aeb79564eac8d7abe3d98ed87be830c46d8e7ca54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_galileo, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 26 12:43:22 np0005596060 systemd[1]: libpod-conmon-897680f6a1bb4413f9131f5aeb79564eac8d7abe3d98ed87be830c46d8e7ca54.scope: Deactivated successfully.
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 26 12:43:22 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 66 pg[9.b( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:22 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 66 pg[9.f( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:22 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 66 pg[9.7( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:22 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 66 pg[9.1f( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:22 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 66 pg[9.17( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:22 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 66 pg[9.3( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:22 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 66 pg[9.1b( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:22 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 66 pg[9.13( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[55,65)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:22 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 539b6ed8-8e3a-407d-a283-e2d79fedc665 does not exist
Jan 26 12:43:22 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev e8f9aca4-915c-4f5b-9dd4-73bacaa728d5 does not exist
Jan 26 12:43:22 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d5a9f6c5-299f-4e1c-8ad3-a677babc929a does not exist
Jan 26 12:43:22 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 26 12:43:22 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:22 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 26 12:43:22 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 26 12:43:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v168: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 6 active+recovery_wait+remapped, 297 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 34/215 objects misplaced (15.814%); 186 B/s, 2 keys/s, 2 objects/s recovering
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.b( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=6 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.977296829s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 146.936614990s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.b( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=6 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.977222443s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.936614990s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.13( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=5 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.977540970s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 146.936950684s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.f( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=6 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.977338791s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 146.936828613s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.13( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=5 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.977456093s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.936950684s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.f( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=6 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.977227211s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.936828613s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.1f( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=5 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.977308273s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 146.936981201s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.1f( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=5 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.977269173s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.936981201s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.7( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=6 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.976840019s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 146.936904907s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.7( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=6 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.976802826s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.936904907s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.1b( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=5 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.976854324s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 146.937057495s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.1b( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=5 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.976813316s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.937057495s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.17( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=5 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.976679802s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 146.936996460s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.17( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=5 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.976649284s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.936996460s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.3( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=6 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.976692200s) [2] async=[2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 146.937057495s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:23 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 67 pg[9.3( v 48'1155 (0'0,48'1155] local-lis/les=65/66 n=6 ec=55/41 lis/c=65/55 les/c/f=66/56/0 sis=67 pruub=14.976529121s) [2] r=-1 lpr=67 pi=[55,67)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 146.937057495s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 12:43:23 np0005596060 podman[96520]: 2026-01-26 17:43:23.30068546 +0000 UTC m=+0.036797564 container create 786a86f0380ee800832060d87a512669c48494725ca8aa57bfdac7e56d431a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:43:23 np0005596060 systemd[1]: Started libpod-conmon-786a86f0380ee800832060d87a512669c48494725ca8aa57bfdac7e56d431a97.scope.
Jan 26 12:43:23 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:23 np0005596060 podman[96520]: 2026-01-26 17:43:23.381610248 +0000 UTC m=+0.117722362 container init 786a86f0380ee800832060d87a512669c48494725ca8aa57bfdac7e56d431a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:43:23 np0005596060 podman[96520]: 2026-01-26 17:43:23.285051912 +0000 UTC m=+0.021164036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:23 np0005596060 podman[96520]: 2026-01-26 17:43:23.391290238 +0000 UTC m=+0.127402362 container start 786a86f0380ee800832060d87a512669c48494725ca8aa57bfdac7e56d431a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 12:43:23 np0005596060 podman[96520]: 2026-01-26 17:43:23.394738074 +0000 UTC m=+0.130850198 container attach 786a86f0380ee800832060d87a512669c48494725ca8aa57bfdac7e56d431a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:43:23 np0005596060 musing_bell[96536]: 167 167
Jan 26 12:43:23 np0005596060 systemd[1]: libpod-786a86f0380ee800832060d87a512669c48494725ca8aa57bfdac7e56d431a97.scope: Deactivated successfully.
Jan 26 12:43:23 np0005596060 podman[96520]: 2026-01-26 17:43:23.398680072 +0000 UTC m=+0.134792196 container died 786a86f0380ee800832060d87a512669c48494725ca8aa57bfdac7e56d431a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:43:23 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b8d045d6343f0d4f7ebc56bf3477dcf2b4f057a80db46e80a0aec390239790c7-merged.mount: Deactivated successfully.
Jan 26 12:43:23 np0005596060 podman[96520]: 2026-01-26 17:43:23.43527164 +0000 UTC m=+0.171383744 container remove 786a86f0380ee800832060d87a512669c48494725ca8aa57bfdac7e56d431a97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 12:43:23 np0005596060 systemd[1]: libpod-conmon-786a86f0380ee800832060d87a512669c48494725ca8aa57bfdac7e56d431a97.scope: Deactivated successfully.
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:23 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.mbryrf (monmap changed)...
Jan 26 12:43:23 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.mbryrf (monmap changed)...
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.mbryrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mbryrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:23 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.mbryrf on compute-0
Jan 26 12:43:23 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.mbryrf on compute-0
Jan 26 12:43:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:23.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:23 np0005596060 podman[96672]: 2026-01-26 17:43:23.964742226 +0000 UTC m=+0.038886946 container create fc962bdbbfc2f3da862215a71f2c4f9a81123df2c91b5ddd7e4062ca1ca17ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:43:23 np0005596060 systemd[1]: Started libpod-conmon-fc962bdbbfc2f3da862215a71f2c4f9a81123df2c91b5ddd7e4062ca1ca17ef6.scope.
Jan 26 12:43:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:24 np0005596060 podman[96672]: 2026-01-26 17:43:24.036483946 +0000 UTC m=+0.110628676 container init fc962bdbbfc2f3da862215a71f2c4f9a81123df2c91b5ddd7e4062ca1ca17ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lewin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 12:43:24 np0005596060 podman[96672]: 2026-01-26 17:43:24.042712291 +0000 UTC m=+0.116857011 container start fc962bdbbfc2f3da862215a71f2c4f9a81123df2c91b5ddd7e4062ca1ca17ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:43:24 np0005596060 podman[96672]: 2026-01-26 17:43:23.949110838 +0000 UTC m=+0.023255578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:24 np0005596060 pedantic_lewin[96688]: 167 167
Jan 26 12:43:24 np0005596060 podman[96672]: 2026-01-26 17:43:24.046769181 +0000 UTC m=+0.120913901 container attach fc962bdbbfc2f3da862215a71f2c4f9a81123df2c91b5ddd7e4062ca1ca17ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lewin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:43:24 np0005596060 systemd[1]: libpod-fc962bdbbfc2f3da862215a71f2c4f9a81123df2c91b5ddd7e4062ca1ca17ef6.scope: Deactivated successfully.
Jan 26 12:43:24 np0005596060 podman[96672]: 2026-01-26 17:43:24.048236818 +0000 UTC m=+0.122381538 container died fc962bdbbfc2f3da862215a71f2c4f9a81123df2c91b5ddd7e4062ca1ca17ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lewin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 12:43:24 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 12:43:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c1b2f78fa5300d1a9254b4a3270b5663fdcf98c3e7f629a068275fd444680e99-merged.mount: Deactivated successfully.
Jan 26 12:43:24 np0005596060 podman[96672]: 2026-01-26 17:43:24.095259724 +0000 UTC m=+0.169404444 container remove fc962bdbbfc2f3da862215a71f2c4f9a81123df2c91b5ddd7e4062ca1ca17ef6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lewin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 12:43:24 np0005596060 systemd[1]: libpod-conmon-fc962bdbbfc2f3da862215a71f2c4f9a81123df2c91b5ddd7e4062ca1ca17ef6.scope: Deactivated successfully.
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:43:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:24.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:24 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 26 12:43:24 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:24 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 26 12:43:24 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: Reconfiguring mgr.compute-0.mbryrf (monmap changed)...
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.mbryrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: Reconfiguring daemon mgr.compute-0.mbryrf on compute-0
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 12:43:24 np0005596060 podman[96823]: 2026-01-26 17:43:24.713909454 +0000 UTC m=+0.061814854 container create 1ac33c76de65e4c4d2afc79af305f69bc983c175171588435e38887679a9135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:43:24 np0005596060 podman[96823]: 2026-01-26 17:43:24.675504982 +0000 UTC m=+0.023410462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:24 np0005596060 systemd[1]: Started libpod-conmon-1ac33c76de65e4c4d2afc79af305f69bc983c175171588435e38887679a9135b.scope.
Jan 26 12:43:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 6 active+recovery_wait+remapped, 297 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 34/215 objects misplaced (15.814%); 186 B/s, 2 keys/s, 2 objects/s recovering
Jan 26 12:43:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:24 np0005596060 podman[96823]: 2026-01-26 17:43:24.869377922 +0000 UTC m=+0.217283342 container init 1ac33c76de65e4c4d2afc79af305f69bc983c175171588435e38887679a9135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:43:24 np0005596060 podman[96823]: 2026-01-26 17:43:24.873894574 +0000 UTC m=+0.221799974 container start 1ac33c76de65e4c4d2afc79af305f69bc983c175171588435e38887679a9135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 12:43:24 np0005596060 funny_rubin[96840]: 167 167
Jan 26 12:43:24 np0005596060 systemd[1]: libpod-1ac33c76de65e4c4d2afc79af305f69bc983c175171588435e38887679a9135b.scope: Deactivated successfully.
Jan 26 12:43:24 np0005596060 podman[96823]: 2026-01-26 17:43:24.915192279 +0000 UTC m=+0.263097699 container attach 1ac33c76de65e4c4d2afc79af305f69bc983c175171588435e38887679a9135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 12:43:24 np0005596060 podman[96823]: 2026-01-26 17:43:24.915555118 +0000 UTC m=+0.263460518 container died 1ac33c76de65e4c4d2afc79af305f69bc983c175171588435e38887679a9135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:43:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay-36411d1087e93efe96497b0e92bbdc0c8abc732ef1216d7fc7c5e34cceabeeaf-merged.mount: Deactivated successfully.
Jan 26 12:43:24 np0005596060 podman[96823]: 2026-01-26 17:43:24.97207198 +0000 UTC m=+0.319977380 container remove 1ac33c76de65e4c4d2afc79af305f69bc983c175171588435e38887679a9135b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:43:24 np0005596060 systemd[1]: libpod-conmon-1ac33c76de65e4c4d2afc79af305f69bc983c175171588435e38887679a9135b.scope: Deactivated successfully.
Jan 26 12:43:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:43:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:43:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:43:25 np0005596060 ceph-mon[74267]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 26 12:43:25 np0005596060 ceph-mon[74267]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 26 12:43:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:25.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:25 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 26 12:43:25 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 26 12:43:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 26 12:43:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 26 12:43:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:25 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Jan 26 12:43:25 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Jan 26 12:43:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:26.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:26 np0005596060 podman[96977]: 2026-01-26 17:43:26.348531882 +0000 UTC m=+0.085088012 container create bfd8a8ab2b308ad9282ccbed6ef48e995061c4781c7380a267c945931f29b92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 26 12:43:26 np0005596060 podman[96977]: 2026-01-26 17:43:26.286314889 +0000 UTC m=+0.022870999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:26 np0005596060 systemd[1]: Started libpod-conmon-bfd8a8ab2b308ad9282ccbed6ef48e995061c4781c7380a267c945931f29b92a.scope.
Jan 26 12:43:26 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:26 np0005596060 podman[96977]: 2026-01-26 17:43:26.509438635 +0000 UTC m=+0.245994825 container init bfd8a8ab2b308ad9282ccbed6ef48e995061c4781c7380a267c945931f29b92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brahmagupta, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 12:43:26 np0005596060 podman[96977]: 2026-01-26 17:43:26.516262314 +0000 UTC m=+0.252818444 container start bfd8a8ab2b308ad9282ccbed6ef48e995061c4781c7380a267c945931f29b92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:43:26 np0005596060 priceless_brahmagupta[96993]: 167 167
Jan 26 12:43:26 np0005596060 systemd[1]: libpod-bfd8a8ab2b308ad9282ccbed6ef48e995061c4781c7380a267c945931f29b92a.scope: Deactivated successfully.
Jan 26 12:43:26 np0005596060 podman[96977]: 2026-01-26 17:43:26.525276528 +0000 UTC m=+0.261832648 container attach bfd8a8ab2b308ad9282ccbed6ef48e995061c4781c7380a267c945931f29b92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 12:43:26 np0005596060 podman[96977]: 2026-01-26 17:43:26.525654087 +0000 UTC m=+0.262210217 container died bfd8a8ab2b308ad9282ccbed6ef48e995061c4781c7380a267c945931f29b92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brahmagupta, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:43:26 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ee559d46cc44b975b48165a838734141ac175055e4082c5e3af6d0a7e9a528b8-merged.mount: Deactivated successfully.
Jan 26 12:43:26 np0005596060 podman[96977]: 2026-01-26 17:43:26.610896462 +0000 UTC m=+0.347452562 container remove bfd8a8ab2b308ad9282ccbed6ef48e995061c4781c7380a267c945931f29b92a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_brahmagupta, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 12:43:26 np0005596060 systemd[1]: libpod-conmon-bfd8a8ab2b308ad9282ccbed6ef48e995061c4781c7380a267c945931f29b92a.scope: Deactivated successfully.
Jan 26 12:43:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 1 active+recovering+remapped, 1 active+remapped, 6 active+recovery_wait+remapped, 297 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 34/215 objects misplaced (15.814%); 134 B/s, 1 keys/s, 2 objects/s recovering
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: Reconfiguring osd.1 (monmap changed)...
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: Reconfiguring daemon osd.1 on compute-0
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:26 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 26 12:43:26 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:26 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 26 12:43:26 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:43:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:27.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:27 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 26 12:43:27 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:27 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Jan 26 12:43:27 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 26 12:43:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:28.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 26 12:43:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v173: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 184 B/s, 5 objects/s recovering
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 26 12:43:28 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 12:43:28 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:28 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 26 12:43:28 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: Reconfiguring osd.0 (monmap changed)...
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: Reconfiguring daemon osd.0 on compute-1
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 26 12:43:28 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:43:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:29.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:29 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 26 12:43:29 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:29 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 26 12:43:29 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 26 12:43:29 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 26 12:43:29 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 69 pg[9.15( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=14.485254288s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 153.130187988s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:29 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 69 pg[9.15( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=14.484817505s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.130187988s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:29 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 69 pg[9.d( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=14.483719826s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 153.129882812s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:29 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 69 pg[9.d( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=14.483608246s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.129882812s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:29 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 69 pg[9.5( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=14.482700348s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 153.129531860s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:29 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 69 pg[9.5( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=14.482501984s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.129531860s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:29 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 69 pg[9.1d( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=14.482371330s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 153.129547119s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:29 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 69 pg[9.1d( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=14.482281685s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.129547119s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 26 12:43:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:30.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 26 12:43:30 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 70 pg[9.15( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:30 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 70 pg[9.15( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:30 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 70 pg[9.d( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:30 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 70 pg[9.d( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:30 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 70 pg[9.5( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:30 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 70 pg[9.5( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:30 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 70 pg[9.1d( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:30 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 70 pg[9.1d( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[1] r=0 lpr=70 pi=[55,70)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:43:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 168 B/s, 5 objects/s recovering
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:31 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.cchxrf (monmap changed)...
Jan 26 12:43:31 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.cchxrf (monmap changed)...
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.cchxrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cchxrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:31 np0005596060 ceph-mgr[74563]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.cchxrf on compute-2
Jan 26 12:43:31 np0005596060 ceph-mgr[74563]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.cchxrf on compute-2
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.cchxrf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[6.e( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=61/61 les/c/f=62/62/0 sis=71) [1] r=0 lpr=71 pi=[61,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[6.6( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=61/61 les/c/f=62/62/0 sis=71) [1] r=0 lpr=71 pi=[61,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[9.e( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=71 pruub=12.824364662s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 153.130249023s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[9.e( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=71 pruub=12.824272156s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.130249023s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[9.6( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=71 pruub=12.823069572s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 153.129806519s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[9.6( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=71 pruub=12.822950363s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.129806519s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[9.1e( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=71 pruub=12.787764549s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 153.095565796s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[9.1e( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=71 pruub=12.787726402s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.095565796s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[9.16( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=71 pruub=12.787752151s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 153.095565796s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[9.16( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=71 pruub=12.787543297s) [0] r=-1 lpr=71 pi=[55,71)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.095565796s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[9.1d( v 48'1155 (0'0,48'1155] local-lis/les=70/71 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[55,70)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[9.5( v 48'1155 (0'0,48'1155] local-lis/les=70/71 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[55,70)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[9.d( v 48'1155 (0'0,48'1155] local-lis/les=70/71 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[55,70)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:31 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 71 pg[9.15( v 48'1155 (0'0,48'1155] local-lis/les=70/71 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[1] async=[2] r=0 lpr=70 pi=[55,70)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:43:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:31.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:32.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:32 np0005596060 podman[97195]: 2026-01-26 17:43:32.539625115 +0000 UTC m=+0.048027643 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 12:43:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 26 12:43:32 np0005596060 podman[97195]: 2026-01-26 17:43:32.658656998 +0000 UTC m=+0.167059506 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 12:43:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 26 12:43:32 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.15( v 48'1155 (0'0,48'1155] local-lis/les=70/71 n=5 ec=55/41 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=14.952692986s) [2] async=[2] r=-1 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 156.314041138s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.15( v 48'1155 (0'0,48'1155] local-lis/les=70/71 n=5 ec=55/41 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=14.952615738s) [2] r=-1 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.314041138s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.e( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.e( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.d( v 48'1155 (0'0,48'1155] local-lis/les=70/71 n=6 ec=55/41 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=14.952122688s) [2] async=[2] r=-1 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 156.314025879s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.d( v 48'1155 (0'0,48'1155] local-lis/les=70/71 n=6 ec=55/41 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=14.952047348s) [2] r=-1 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.314025879s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.6( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.6( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.1d( v 48'1155 (0'0,48'1155] local-lis/les=70/71 n=5 ec=55/41 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=14.950027466s) [2] async=[2] r=-1 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 156.312408447s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.1d( v 48'1155 (0'0,48'1155] local-lis/les=70/71 n=5 ec=55/41 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=14.949971199s) [2] r=-1 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.312408447s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.5( v 48'1155 (0'0,48'1155] local-lis/les=70/71 n=6 ec=55/41 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=14.950234413s) [2] async=[2] r=-1 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 156.312408447s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.5( v 48'1155 (0'0,48'1155] local-lis/les=70/71 n=6 ec=55/41 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=14.949764252s) [2] r=-1 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 156.312408447s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.1e( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.16( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.1e( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[9.16( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=72) [0]/[1] r=0 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[6.6( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=71/72 n=2 ec=51/21 lis/c=61/61 les/c/f=62/62/0 sis=71) [1] r=0 lpr=71 pi=[61,71)/1 crt=48'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:32 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 72 pg[6.e( v 48'39 lc 45'19 (0'0,48'39] local-lis/les=71/72 n=1 ec=51/21 lis/c=61/61 les/c/f=62/62/0 sis=71) [1] r=0 lpr=71 pi=[61,71)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:32 np0005596060 ceph-mon[74267]: Reconfiguring mgr.compute-2.cchxrf (monmap changed)...
Jan 26 12:43:32 np0005596060 ceph-mon[74267]: Reconfiguring daemon mgr.compute-2.cchxrf on compute-2
Jan 26 12:43:32 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 26 12:43:32 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 26 12:43:32 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:32 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 4 active+remapped, 2 peering, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 188 B/s, 6 objects/s recovering
Jan 26 12:43:33 np0005596060 podman[97352]: 2026-01-26 17:43:33.263735692 +0000 UTC m=+0.074245053 container exec e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:43:33 np0005596060 podman[97352]: 2026-01-26 17:43:33.273607787 +0000 UTC m=+0.084117138 container exec_died e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:33 np0005596060 podman[97469]: 2026-01-26 17:43:33.483794952 +0000 UTC m=+0.048008742 container exec 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., version=2.2.4, io.buildah.version=1.28.2, name=keepalived, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, release=1793)
Jan 26 12:43:33 np0005596060 podman[97469]: 2026-01-26 17:43:33.499547623 +0000 UTC m=+0.063761413 container exec_died 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=keepalived, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, description=keepalived for Ceph)
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:33 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:33 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 73 pg[9.e( v 48'1155 (0'0,48'1155] local-lis/les=72/73 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:33 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 73 pg[9.16( v 48'1155 (0'0,48'1155] local-lis/les=72/73 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:33 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 73 pg[9.6( v 48'1155 (0'0,48'1155] local-lis/les=72/73 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:33 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 73 pg[9.1e( v 48'1155 (0'0,48'1155] local-lis/les=72/73 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=72) [0]/[1] async=[0] r=0 lpr=72 pi=[55,72)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:33.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:43:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:43:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:34.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 26 12:43:34 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:34 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 26 12:43:34 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 26 12:43:34 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 74 pg[9.e( v 48'1155 (0'0,48'1155] local-lis/les=72/73 n=6 ec=55/41 lis/c=72/55 les/c/f=73/56/0 sis=74 pruub=14.912348747s) [0] async=[0] r=-1 lpr=74 pi=[55,74)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 158.384597778s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:34 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 74 pg[9.6( v 48'1155 (0'0,48'1155] local-lis/les=72/73 n=6 ec=55/41 lis/c=72/55 les/c/f=73/56/0 sis=74 pruub=14.912234306s) [0] async=[0] r=-1 lpr=74 pi=[55,74)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 158.384826660s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:34 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 74 pg[9.6( v 48'1155 (0'0,48'1155] local-lis/les=72/73 n=6 ec=55/41 lis/c=72/55 les/c/f=73/56/0 sis=74 pruub=14.912150383s) [0] r=-1 lpr=74 pi=[55,74)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.384826660s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:34 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 74 pg[9.e( v 48'1155 (0'0,48'1155] local-lis/les=72/73 n=6 ec=55/41 lis/c=72/55 les/c/f=73/56/0 sis=74 pruub=14.912073135s) [0] r=-1 lpr=74 pi=[55,74)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.384597778s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:34 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 74 pg[9.16( v 48'1155 (0'0,48'1155] local-lis/les=72/73 n=5 ec=55/41 lis/c=72/55 les/c/f=73/56/0 sis=74 pruub=14.911492348s) [0] async=[0] r=-1 lpr=74 pi=[55,74)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 158.384674072s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:34 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 74 pg[9.16( v 48'1155 (0'0,48'1155] local-lis/les=72/73 n=5 ec=55/41 lis/c=72/55 les/c/f=73/56/0 sis=74 pruub=14.911395073s) [0] r=-1 lpr=74 pi=[55,74)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.384674072s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:34 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 74 pg[9.1e( v 48'1155 (0'0,48'1155] local-lis/les=72/73 n=5 ec=55/41 lis/c=72/55 les/c/f=73/56/0 sis=74 pruub=14.911385536s) [0] async=[0] r=-1 lpr=74 pi=[55,74)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 158.384841919s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:34 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 74 pg[9.1e( v 48'1155 (0'0,48'1155] local-lis/les=72/73 n=5 ec=55/41 lis/c=72/55 les/c/f=73/56/0 sis=74 pruub=14.911226273s) [0] r=-1 lpr=74 pi=[55,74)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.384841919s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 4 active+remapped, 2 peering, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 188 B/s, 6 objects/s recovering
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:35 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 0c758a50-da1b-473b-bb41-ef1b090c8ecb does not exist
Jan 26 12:43:35 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 0702b6dd-41a2-4745-a995-ad26afadf57f does not exist
Jan 26 12:43:35 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ba5f12f7-1939-4f0b-b6f5-9321befb91da does not exist
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:43:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:35.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:35 np0005596060 podman[97777]: 2026-01-26 17:43:35.740461504 +0000 UTC m=+0.024989180 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 26 12:43:35 np0005596060 podman[97777]: 2026-01-26 17:43:35.841107652 +0000 UTC m=+0.125635348 container create 034b0e0953718cf4bccc5f9441a28dba435a9347dd243018e34d6f4bb87e5bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 26 12:43:35 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 26 12:43:35 np0005596060 systemd[1]: Started libpod-conmon-034b0e0953718cf4bccc5f9441a28dba435a9347dd243018e34d6f4bb87e5bf8.scope.
Jan 26 12:43:35 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:35 np0005596060 podman[97777]: 2026-01-26 17:43:35.959616552 +0000 UTC m=+0.244144248 container init 034b0e0953718cf4bccc5f9441a28dba435a9347dd243018e34d6f4bb87e5bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hermann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 12:43:35 np0005596060 podman[97777]: 2026-01-26 17:43:35.97120078 +0000 UTC m=+0.255728446 container start 034b0e0953718cf4bccc5f9441a28dba435a9347dd243018e34d6f4bb87e5bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:43:35 np0005596060 podman[97777]: 2026-01-26 17:43:35.97522758 +0000 UTC m=+0.259755286 container attach 034b0e0953718cf4bccc5f9441a28dba435a9347dd243018e34d6f4bb87e5bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:43:35 np0005596060 focused_hermann[97793]: 167 167
Jan 26 12:43:35 np0005596060 systemd[1]: libpod-034b0e0953718cf4bccc5f9441a28dba435a9347dd243018e34d6f4bb87e5bf8.scope: Deactivated successfully.
Jan 26 12:43:35 np0005596060 conmon[97793]: conmon 034b0e0953718cf4bccc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-034b0e0953718cf4bccc5f9441a28dba435a9347dd243018e34d6f4bb87e5bf8.scope/container/memory.events
Jan 26 12:43:35 np0005596060 podman[97777]: 2026-01-26 17:43:35.979817164 +0000 UTC m=+0.264344830 container died 034b0e0953718cf4bccc5f9441a28dba435a9347dd243018e34d6f4bb87e5bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hermann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 12:43:36 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2aa11a9c0ed6a3ccb64b911a00c262c0a02230990c0f7ecd78c2b3fc6a0212c3-merged.mount: Deactivated successfully.
Jan 26 12:43:36 np0005596060 podman[97777]: 2026-01-26 17:43:36.03045119 +0000 UTC m=+0.314978886 container remove 034b0e0953718cf4bccc5f9441a28dba435a9347dd243018e34d6f4bb87e5bf8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_hermann, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:43:36 np0005596060 systemd[1]: libpod-conmon-034b0e0953718cf4bccc5f9441a28dba435a9347dd243018e34d6f4bb87e5bf8.scope: Deactivated successfully.
Jan 26 12:43:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:36.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:36 np0005596060 podman[97818]: 2026-01-26 17:43:36.224243668 +0000 UTC m=+0.053664942 container create b4c3a917b599308e8f625f3eba14ee62941c00f4c6c975318db70655d5912d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_almeida, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:43:36 np0005596060 systemd[1]: Started libpod-conmon-b4c3a917b599308e8f625f3eba14ee62941c00f4c6c975318db70655d5912d73.scope.
Jan 26 12:43:36 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a1f73b3f13e6735dab2d219ea9335ded20bbfb891ca726984ed2a779dd4ae9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a1f73b3f13e6735dab2d219ea9335ded20bbfb891ca726984ed2a779dd4ae9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a1f73b3f13e6735dab2d219ea9335ded20bbfb891ca726984ed2a779dd4ae9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a1f73b3f13e6735dab2d219ea9335ded20bbfb891ca726984ed2a779dd4ae9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0a1f73b3f13e6735dab2d219ea9335ded20bbfb891ca726984ed2a779dd4ae9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:36 np0005596060 podman[97818]: 2026-01-26 17:43:36.204417596 +0000 UTC m=+0.033838920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:36 np0005596060 podman[97818]: 2026-01-26 17:43:36.309537155 +0000 UTC m=+0.138958449 container init b4c3a917b599308e8f625f3eba14ee62941c00f4c6c975318db70655d5912d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_almeida, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 12:43:36 np0005596060 podman[97818]: 2026-01-26 17:43:36.315673197 +0000 UTC m=+0.145094471 container start b4c3a917b599308e8f625f3eba14ee62941c00f4c6c975318db70655d5912d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:43:36 np0005596060 podman[97818]: 2026-01-26 17:43:36.318874306 +0000 UTC m=+0.148295580 container attach b4c3a917b599308e8f625f3eba14ee62941c00f4c6c975318db70655d5912d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_almeida, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:43:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 4 active+remapped, 2 peering, 299 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 53 B/s, 4 objects/s recovering
Jan 26 12:43:37 np0005596060 hungry_almeida[97834]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:43:37 np0005596060 hungry_almeida[97834]: --> relative data size: 1.0
Jan 26 12:43:37 np0005596060 hungry_almeida[97834]: --> All data devices are unavailable
Jan 26 12:43:37 np0005596060 systemd[1]: libpod-b4c3a917b599308e8f625f3eba14ee62941c00f4c6c975318db70655d5912d73.scope: Deactivated successfully.
Jan 26 12:43:37 np0005596060 conmon[97834]: conmon b4c3a917b599308e8f62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4c3a917b599308e8f625f3eba14ee62941c00f4c6c975318db70655d5912d73.scope/container/memory.events
Jan 26 12:43:37 np0005596060 podman[97818]: 2026-01-26 17:43:37.173307727 +0000 UTC m=+1.002729021 container died b4c3a917b599308e8f625f3eba14ee62941c00f4c6c975318db70655d5912d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 26 12:43:37 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 26 12:43:37 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 26 12:43:37 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a0a1f73b3f13e6735dab2d219ea9335ded20bbfb891ca726984ed2a779dd4ae9-merged.mount: Deactivated successfully.
Jan 26 12:43:37 np0005596060 podman[97818]: 2026-01-26 17:43:37.7820197 +0000 UTC m=+1.611441014 container remove b4c3a917b599308e8f625f3eba14ee62941c00f4c6c975318db70655d5912d73 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:43:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:37.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:37 np0005596060 systemd[1]: libpod-conmon-b4c3a917b599308e8f625f3eba14ee62941c00f4c6c975318db70655d5912d73.scope: Deactivated successfully.
Jan 26 12:43:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:38.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:38 np0005596060 podman[98002]: 2026-01-26 17:43:38.548241021 +0000 UTC m=+0.062654106 container create 5caf45694768bf98afc2a82bd2ceb652cdfe59a9260e2440e453d385801fbfb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:43:38 np0005596060 systemd[1]: Started libpod-conmon-5caf45694768bf98afc2a82bd2ceb652cdfe59a9260e2440e453d385801fbfb7.scope.
Jan 26 12:43:38 np0005596060 podman[98002]: 2026-01-26 17:43:38.525996249 +0000 UTC m=+0.040409374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:38 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:38 np0005596060 podman[98002]: 2026-01-26 17:43:38.794132282 +0000 UTC m=+0.308545377 container init 5caf45694768bf98afc2a82bd2ceb652cdfe59a9260e2440e453d385801fbfb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:43:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 3 objects/s recovering
Jan 26 12:43:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 26 12:43:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 26 12:43:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 26 12:43:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 26 12:43:38 np0005596060 podman[98002]: 2026-01-26 17:43:38.803301439 +0000 UTC m=+0.317714524 container start 5caf45694768bf98afc2a82bd2ceb652cdfe59a9260e2440e453d385801fbfb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:43:38 np0005596060 optimistic_keldysh[98018]: 167 167
Jan 26 12:43:38 np0005596060 systemd[1]: libpod-5caf45694768bf98afc2a82bd2ceb652cdfe59a9260e2440e453d385801fbfb7.scope: Deactivated successfully.
Jan 26 12:43:38 np0005596060 conmon[98018]: conmon 5caf45694768bf98afc2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5caf45694768bf98afc2a82bd2ceb652cdfe59a9260e2440e453d385801fbfb7.scope/container/memory.events
Jan 26 12:43:38 np0005596060 podman[98002]: 2026-01-26 17:43:38.81781203 +0000 UTC m=+0.332225125 container attach 5caf45694768bf98afc2a82bd2ceb652cdfe59a9260e2440e453d385801fbfb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 12:43:38 np0005596060 podman[98002]: 2026-01-26 17:43:38.818550198 +0000 UTC m=+0.332963273 container died 5caf45694768bf98afc2a82bd2ceb652cdfe59a9260e2440e453d385801fbfb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 12:43:38 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d3ca1a807ce547f1e396fccb1c545e6b93561003a5a571170e993b216482f081-merged.mount: Deactivated successfully.
Jan 26 12:43:38 np0005596060 podman[98002]: 2026-01-26 17:43:38.857656738 +0000 UTC m=+0.372069833 container remove 5caf45694768bf98afc2a82bd2ceb652cdfe59a9260e2440e453d385801fbfb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_keldysh, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 12:43:38 np0005596060 systemd[1]: libpod-conmon-5caf45694768bf98afc2a82bd2ceb652cdfe59a9260e2440e453d385801fbfb7.scope: Deactivated successfully.
Jan 26 12:43:39 np0005596060 podman[98043]: 2026-01-26 17:43:39.063430977 +0000 UTC m=+0.067937884 container create 7d86d19749ba31842fa4f95a0080dfe91e81357f7f8b79207c94772a38e1274b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:43:39 np0005596060 systemd[1]: Started libpod-conmon-7d86d19749ba31842fa4f95a0080dfe91e81357f7f8b79207c94772a38e1274b.scope.
Jan 26 12:43:39 np0005596060 podman[98043]: 2026-01-26 17:43:39.035054373 +0000 UTC m=+0.039561360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:39 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:39 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc3904796e655e10dfa00b9fc398226b7093860afae7e2587093180554d9ceef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:39 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc3904796e655e10dfa00b9fc398226b7093860afae7e2587093180554d9ceef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:39 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc3904796e655e10dfa00b9fc398226b7093860afae7e2587093180554d9ceef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:39 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc3904796e655e10dfa00b9fc398226b7093860afae7e2587093180554d9ceef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:39 np0005596060 podman[98043]: 2026-01-26 17:43:39.160144534 +0000 UTC m=+0.164651491 container init 7d86d19749ba31842fa4f95a0080dfe91e81357f7f8b79207c94772a38e1274b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:43:39 np0005596060 podman[98043]: 2026-01-26 17:43:39.167600194 +0000 UTC m=+0.172107121 container start 7d86d19749ba31842fa4f95a0080dfe91e81357f7f8b79207c94772a38e1274b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 12:43:39 np0005596060 podman[98043]: 2026-01-26 17:43:39.171566066 +0000 UTC m=+0.176072973 container attach 7d86d19749ba31842fa4f95a0080dfe91e81357f7f8b79207c94772a38e1274b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:43:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 26 12:43:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 26 12:43:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 26 12:43:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 26 12:43:39 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 26 12:43:39 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 26 12:43:39 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 26 12:43:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:39.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]: {
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:    "1": [
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:        {
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            "devices": [
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "/dev/loop3"
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            ],
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            "lv_name": "ceph_lv0",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            "lv_size": "7511998464",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            "name": "ceph_lv0",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            "tags": {
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "ceph.cluster_name": "ceph",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "ceph.crush_device_class": "",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "ceph.encrypted": "0",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "ceph.osd_id": "1",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "ceph.type": "block",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:                "ceph.vdo": "0"
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            },
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            "type": "block",
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:            "vg_name": "ceph_vg0"
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:        }
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]:    ]
Jan 26 12:43:39 np0005596060 intelligent_tu[98059]: }
Jan 26 12:43:39 np0005596060 systemd[1]: libpod-7d86d19749ba31842fa4f95a0080dfe91e81357f7f8b79207c94772a38e1274b.scope: Deactivated successfully.
Jan 26 12:43:39 np0005596060 podman[98043]: 2026-01-26 17:43:39.929852567 +0000 UTC m=+0.934359514 container died 7d86d19749ba31842fa4f95a0080dfe91e81357f7f8b79207c94772a38e1274b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:43:39 np0005596060 systemd[1]: var-lib-containers-storage-overlay-bc3904796e655e10dfa00b9fc398226b7093860afae7e2587093180554d9ceef-merged.mount: Deactivated successfully.
Jan 26 12:43:40 np0005596060 podman[98043]: 2026-01-26 17:43:40.011164311 +0000 UTC m=+1.015671228 container remove 7d86d19749ba31842fa4f95a0080dfe91e81357f7f8b79207c94772a38e1274b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tu, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 12:43:40 np0005596060 systemd[1]: libpod-conmon-7d86d19749ba31842fa4f95a0080dfe91e81357f7f8b79207c94772a38e1274b.scope: Deactivated successfully.
Jan 26 12:43:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:40.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 26 12:43:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 26 12:43:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:43:40 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 26 12:43:40 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 26 12:43:40 np0005596060 podman[98218]: 2026-01-26 17:43:40.678816221 +0000 UTC m=+0.035010954 container create d79c690e264c37943022b08137e3bef53fd720479a79803f84e746a06e219111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ellis, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:43:40 np0005596060 systemd[1]: Started libpod-conmon-d79c690e264c37943022b08137e3bef53fd720479a79803f84e746a06e219111.scope.
Jan 26 12:43:40 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:40 np0005596060 podman[98218]: 2026-01-26 17:43:40.755100697 +0000 UTC m=+0.111295480 container init d79c690e264c37943022b08137e3bef53fd720479a79803f84e746a06e219111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 12:43:40 np0005596060 podman[98218]: 2026-01-26 17:43:40.662830863 +0000 UTC m=+0.019025616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:40 np0005596060 podman[98218]: 2026-01-26 17:43:40.763914612 +0000 UTC m=+0.120109345 container start d79c690e264c37943022b08137e3bef53fd720479a79803f84e746a06e219111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ellis, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 12:43:40 np0005596060 podman[98218]: 2026-01-26 17:43:40.767218376 +0000 UTC m=+0.123413159 container attach d79c690e264c37943022b08137e3bef53fd720479a79803f84e746a06e219111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 12:43:40 np0005596060 keen_ellis[98234]: 167 167
Jan 26 12:43:40 np0005596060 systemd[1]: libpod-d79c690e264c37943022b08137e3bef53fd720479a79803f84e746a06e219111.scope: Deactivated successfully.
Jan 26 12:43:40 np0005596060 conmon[98234]: conmon d79c690e264c37943022 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d79c690e264c37943022b08137e3bef53fd720479a79803f84e746a06e219111.scope/container/memory.events
Jan 26 12:43:40 np0005596060 podman[98218]: 2026-01-26 17:43:40.769863433 +0000 UTC m=+0.126058186 container died d79c690e264c37943022b08137e3bef53fd720479a79803f84e746a06e219111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ellis, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 12:43:40 np0005596060 systemd[1]: var-lib-containers-storage-overlay-30c951ba03902b7dd0250ebddfbb65f29c45a18f8244a5a7f8da3a8e71a0943a-merged.mount: Deactivated successfully.
Jan 26 12:43:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 3 objects/s recovering
Jan 26 12:43:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 26 12:43:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 26 12:43:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 26 12:43:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 26 12:43:40 np0005596060 podman[98218]: 2026-01-26 17:43:40.824567839 +0000 UTC m=+0.180762592 container remove d79c690e264c37943022b08137e3bef53fd720479a79803f84e746a06e219111 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:43:40 np0005596060 systemd[1]: libpod-conmon-d79c690e264c37943022b08137e3bef53fd720479a79803f84e746a06e219111.scope: Deactivated successfully.
Jan 26 12:43:41 np0005596060 podman[98259]: 2026-01-26 17:43:41.039146762 +0000 UTC m=+0.061462299 container create 41c3bfb0e43918849bef730efc76a5a3babc196db196b4ad44ca480e6ca3bc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_raman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:43:41 np0005596060 systemd[1]: Started libpod-conmon-41c3bfb0e43918849bef730efc76a5a3babc196db196b4ad44ca480e6ca3bc08.scope.
Jan 26 12:43:41 np0005596060 podman[98259]: 2026-01-26 17:43:41.012264136 +0000 UTC m=+0.034579703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:43:41 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5b29831326322615136332d0274e914f6a37fb24a98f8ff857ea4f3aec9cb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5b29831326322615136332d0274e914f6a37fb24a98f8ff857ea4f3aec9cb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5b29831326322615136332d0274e914f6a37fb24a98f8ff857ea4f3aec9cb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e5b29831326322615136332d0274e914f6a37fb24a98f8ff857ea4f3aec9cb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:41 np0005596060 podman[98259]: 2026-01-26 17:43:41.135708225 +0000 UTC m=+0.158023772 container init 41c3bfb0e43918849bef730efc76a5a3babc196db196b4ad44ca480e6ca3bc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 26 12:43:41 np0005596060 podman[98259]: 2026-01-26 17:43:41.146832019 +0000 UTC m=+0.169147556 container start 41c3bfb0e43918849bef730efc76a5a3babc196db196b4ad44ca480e6ca3bc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:43:41 np0005596060 podman[98259]: 2026-01-26 17:43:41.150345178 +0000 UTC m=+0.172660745 container attach 41c3bfb0e43918849bef730efc76a5a3babc196db196b4ad44ca480e6ca3bc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_raman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 12:43:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 26 12:43:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 26 12:43:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 26 12:43:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 26 12:43:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 26 12:43:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 26 12:43:41 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 77 pg[9.8( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=77 pruub=11.097326279s) [2] r=-1 lpr=77 pi=[55,77)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 161.130111694s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:41 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 77 pg[9.8( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=77 pruub=11.097098351s) [2] r=-1 lpr=77 pi=[55,77)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.130111694s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:41 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 77 pg[9.18( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=77 pruub=11.095988274s) [2] r=-1 lpr=77 pi=[55,77)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 161.129608154s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:41 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 77 pg[9.18( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=77 pruub=11.095867157s) [2] r=-1 lpr=77 pi=[55,77)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 161.129608154s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:41 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 77 pg[6.8( v 48'39 (0'0,48'39] local-lis/les=51/52 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=77 pruub=14.318904877s) [0] r=-1 lpr=77 pi=[51,77)/1 crt=48'39 lcod 0'0 mlcod 0'0 active pruub 164.353012085s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:41 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 77 pg[6.8( v 48'39 (0'0,48'39] local-lis/les=51/52 n=0 ec=51/21 lis/c=51/51 les/c/f=52/52/0 sis=77 pruub=14.318775177s) [0] r=-1 lpr=77 pi=[51,77)/1 crt=48'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.353012085s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:41 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 26 12:43:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:41.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:42 np0005596060 magical_raman[98275]: {
Jan 26 12:43:42 np0005596060 magical_raman[98275]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:43:42 np0005596060 magical_raman[98275]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:43:42 np0005596060 magical_raman[98275]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:43:42 np0005596060 magical_raman[98275]:        "osd_id": 1,
Jan 26 12:43:42 np0005596060 magical_raman[98275]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:43:42 np0005596060 magical_raman[98275]:        "type": "bluestore"
Jan 26 12:43:42 np0005596060 magical_raman[98275]:    }
Jan 26 12:43:42 np0005596060 magical_raman[98275]: }
Jan 26 12:43:42 np0005596060 systemd[1]: libpod-41c3bfb0e43918849bef730efc76a5a3babc196db196b4ad44ca480e6ca3bc08.scope: Deactivated successfully.
Jan 26 12:43:42 np0005596060 systemd[1]: libpod-41c3bfb0e43918849bef730efc76a5a3babc196db196b4ad44ca480e6ca3bc08.scope: Consumed 1.012s CPU time.
Jan 26 12:43:42 np0005596060 podman[98259]: 2026-01-26 17:43:42.158890492 +0000 UTC m=+1.181206039 container died 41c3bfb0e43918849bef730efc76a5a3babc196db196b4ad44ca480e6ca3bc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:43:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:42.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:42 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0e5b29831326322615136332d0274e914f6a37fb24a98f8ff857ea4f3aec9cb5-merged.mount: Deactivated successfully.
Jan 26 12:43:42 np0005596060 podman[98259]: 2026-01-26 17:43:42.245577003 +0000 UTC m=+1.267892540 container remove 41c3bfb0e43918849bef730efc76a5a3babc196db196b4ad44ca480e6ca3bc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_raman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:43:42 np0005596060 systemd[1]: libpod-conmon-41c3bfb0e43918849bef730efc76a5a3babc196db196b4ad44ca480e6ca3bc08.scope: Deactivated successfully.
Jan 26 12:43:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:43:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:43:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 26 12:43:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:42 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev acc78056-b473-4ddc-8679-e47c6aeebeee does not exist
Jan 26 12:43:42 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ea22fd76-281e-4e03-822a-6fb9995c9811 does not exist
Jan 26 12:43:42 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b3109031-17f9-4a8b-8ae9-afa75ca69d4e does not exist
Jan 26 12:43:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 26 12:43:42 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 26 12:43:42 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 26 12:43:42 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:42 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 78 pg[9.8( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] r=0 lpr=78 pi=[55,78)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:42 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 78 pg[9.8( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] r=0 lpr=78 pi=[55,78)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:42 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 78 pg[9.18( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] r=0 lpr=78 pi=[55,78)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:42 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 78 pg[9.18( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] r=0 lpr=78 pi=[55,78)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:42 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 26 12:43:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 26 12:43:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 26 12:43:43 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:43:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:43.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:43:43
Jan 26 12:43:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:43:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Some PGs (0.006557) are unknown; try again later
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:43:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:44.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 26 12:43:44 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 26 12:43:44 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 79 pg[9.18( v 48'1155 (0'0,48'1155] local-lis/les=78/79 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[55,78)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:44 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 79 pg[9.8( v 48'1155 (0'0,48'1155] local-lis/les=78/79 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=78) [2]/[1] async=[2] r=0 lpr=78 pi=[55,78)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:43:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 26 12:43:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 26 12:43:45 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 26 12:43:45 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 80 pg[9.8( v 48'1155 (0'0,48'1155] local-lis/les=78/79 n=6 ec=55/41 lis/c=78/55 les/c/f=79/56/0 sis=80 pruub=15.018932343s) [2] async=[2] r=-1 lpr=80 pi=[55,80)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 169.027236938s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:45 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 80 pg[9.18( v 48'1155 (0'0,48'1155] local-lis/les=78/79 n=5 ec=55/41 lis/c=78/55 les/c/f=79/56/0 sis=80 pruub=15.017723083s) [2] async=[2] r=-1 lpr=80 pi=[55,80)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 169.026077271s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:45 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 80 pg[9.8( v 48'1155 (0'0,48'1155] local-lis/les=78/79 n=6 ec=55/41 lis/c=78/55 les/c/f=79/56/0 sis=80 pruub=15.018738747s) [2] r=-1 lpr=80 pi=[55,80)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.027236938s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:45 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 80 pg[9.18( v 48'1155 (0'0,48'1155] local-lis/les=78/79 n=5 ec=55/41 lis/c=78/55 les/c/f=79/56/0 sis=80 pruub=15.017310143s) [2] r=-1 lpr=80 pi=[55,80)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.026077271s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:43:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:45.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:46.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 26 12:43:46 np0005596060 python3[98385]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:43:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 26 12:43:46 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 26 12:43:46 np0005596060 podman[98386]: 2026-01-26 17:43:46.521019836 +0000 UTC m=+0.028133818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:43:46 np0005596060 podman[98386]: 2026-01-26 17:43:46.710671574 +0000 UTC m=+0.217785526 container create 1dcbb2030552a26d917c3a0c02ba825886c33870f5010f6394188a19bf268e05 (image=quay.io/ceph/ceph:v18, name=pensive_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:43:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 2 unknown, 303 active+clean; 457 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:43:47 np0005596060 systemd[1]: Started libpod-conmon-1dcbb2030552a26d917c3a0c02ba825886c33870f5010f6394188a19bf268e05.scope.
Jan 26 12:43:47 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efbd59d7caa7a92897895cd3e06a518a18f9844ea70d77d5872dea3892b3182/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8efbd59d7caa7a92897895cd3e06a518a18f9844ea70d77d5872dea3892b3182/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:47 np0005596060 podman[98386]: 2026-01-26 17:43:47.138462046 +0000 UTC m=+0.645576048 container init 1dcbb2030552a26d917c3a0c02ba825886c33870f5010f6394188a19bf268e05 (image=quay.io/ceph/ceph:v18, name=pensive_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 12:43:47 np0005596060 podman[98386]: 2026-01-26 17:43:47.152315019 +0000 UTC m=+0.659428971 container start 1dcbb2030552a26d917c3a0c02ba825886c33870f5010f6394188a19bf268e05 (image=quay.io/ceph/ceph:v18, name=pensive_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:43:47 np0005596060 podman[98386]: 2026-01-26 17:43:47.159575754 +0000 UTC m=+0.666689706 container attach 1dcbb2030552a26d917c3a0c02ba825886c33870f5010f6394188a19bf268e05 (image=quay.io/ceph/ceph:v18, name=pensive_poincare, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:43:47 np0005596060 pensive_poincare[98402]: could not fetch user info: no user info saved
Jan 26 12:43:47 np0005596060 systemd[1]: libpod-1dcbb2030552a26d917c3a0c02ba825886c33870f5010f6394188a19bf268e05.scope: Deactivated successfully.
Jan 26 12:43:47 np0005596060 podman[98386]: 2026-01-26 17:43:47.440651713 +0000 UTC m=+0.947765625 container died 1dcbb2030552a26d917c3a0c02ba825886c33870f5010f6394188a19bf268e05 (image=quay.io/ceph/ceph:v18, name=pensive_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 12:43:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8efbd59d7caa7a92897895cd3e06a518a18f9844ea70d77d5872dea3892b3182-merged.mount: Deactivated successfully.
Jan 26 12:43:47 np0005596060 podman[98386]: 2026-01-26 17:43:47.491346227 +0000 UTC m=+0.998460179 container remove 1dcbb2030552a26d917c3a0c02ba825886c33870f5010f6394188a19bf268e05 (image=quay.io/ceph/ceph:v18, name=pensive_poincare, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:43:47 np0005596060 systemd[1]: libpod-conmon-1dcbb2030552a26d917c3a0c02ba825886c33870f5010f6394188a19bf268e05.scope: Deactivated successfully.
Jan 26 12:43:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:47.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:47 np0005596060 python3[98524]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid d4cd1917-5876-51b6-bc64-65a16199754d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:43:47 np0005596060 podman[98525]: 2026-01-26 17:43:47.889283597 +0000 UTC m=+0.043298366 container create 9bd41bbcaf7e52a4354b9ac9b5bb69c710924dceebd0f271851012bb45255f49 (image=quay.io/ceph/ceph:v18, name=cranky_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 12:43:47 np0005596060 systemd[1]: Started libpod-conmon-9bd41bbcaf7e52a4354b9ac9b5bb69c710924dceebd0f271851012bb45255f49.scope.
Jan 26 12:43:47 np0005596060 podman[98525]: 2026-01-26 17:43:47.870322213 +0000 UTC m=+0.024337012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 26 12:43:47 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:43:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffffbaabd0b1aef1d9936d2b78176fe474a525d4d080be0ca774043cb6f6893b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffffbaabd0b1aef1d9936d2b78176fe474a525d4d080be0ca774043cb6f6893b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:43:47 np0005596060 podman[98525]: 2026-01-26 17:43:47.995585138 +0000 UTC m=+0.149600137 container init 9bd41bbcaf7e52a4354b9ac9b5bb69c710924dceebd0f271851012bb45255f49 (image=quay.io/ceph/ceph:v18, name=cranky_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:43:48 np0005596060 podman[98525]: 2026-01-26 17:43:48.001603742 +0000 UTC m=+0.155618551 container start 9bd41bbcaf7e52a4354b9ac9b5bb69c710924dceebd0f271851012bb45255f49 (image=quay.io/ceph/ceph:v18, name=cranky_jackson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:43:48 np0005596060 podman[98525]: 2026-01-26 17:43:48.018618486 +0000 UTC m=+0.172633345 container attach 9bd41bbcaf7e52a4354b9ac9b5bb69c710924dceebd0f271851012bb45255f49 (image=quay.io/ceph/ceph:v18, name=cranky_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:43:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:43:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:48.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]: {
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "user_id": "openstack",
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "display_name": "openstack",
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "email": "",
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "suspended": 0,
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "max_buckets": 1000,
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "subusers": [],
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "keys": [
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:        {
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:            "user": "openstack",
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:            "access_key": "Q88TJQ6GG1Z792B1D4VS",
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:            "secret_key": "CSDCcUrQXMGynw8pTD8eaNjzb7UYqiYu3cHmiq7L"
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:        }
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    ],
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "swift_keys": [],
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "caps": [],
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "op_mask": "read, write, delete",
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "default_placement": "",
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "default_storage_class": "",
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "placement_tags": [],
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "bucket_quota": {
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:        "enabled": false,
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:        "check_on_raw": false,
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:        "max_size": -1,
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:        "max_size_kb": 0,
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:        "max_objects": -1
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    },
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "user_quota": {
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:        "enabled": false,
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:        "check_on_raw": false,
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:        "max_size": -1,
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:        "max_size_kb": 0,
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:        "max_objects": -1
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    },
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "temp_url_keys": [],
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "type": "rgw",
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]:    "mfa_ids": []
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]: }
Jan 26 12:43:48 np0005596060 cranky_jackson[98540]: 
Jan 26 12:43:48 np0005596060 systemd[1]: libpod-9bd41bbcaf7e52a4354b9ac9b5bb69c710924dceebd0f271851012bb45255f49.scope: Deactivated successfully.
Jan 26 12:43:48 np0005596060 podman[98525]: 2026-01-26 17:43:48.722679514 +0000 UTC m=+0.876694323 container died 9bd41bbcaf7e52a4354b9ac9b5bb69c710924dceebd0f271851012bb45255f49 (image=quay.io/ceph/ceph:v18, name=cranky_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:43:48 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ffffbaabd0b1aef1d9936d2b78176fe474a525d4d080be0ca774043cb6f6893b-merged.mount: Deactivated successfully.
Jan 26 12:43:48 np0005596060 podman[98525]: 2026-01-26 17:43:48.78683723 +0000 UTC m=+0.940852029 container remove 9bd41bbcaf7e52a4354b9ac9b5bb69c710924dceebd0f271851012bb45255f49 (image=quay.io/ceph/ceph:v18, name=cranky_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:43:48 np0005596060 systemd[1]: libpod-conmon-9bd41bbcaf7e52a4354b9ac9b5bb69c710924dceebd0f271851012bb45255f49.scope: Deactivated successfully.
Jan 26 12:43:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 170 B/s wr, 31 op/s; 36 B/s, 1 objects/s recovering
Jan 26 12:43:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 26 12:43:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 26 12:43:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 26 12:43:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 26 12:43:49 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 26 12:43:49 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 26 12:43:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 26 12:43:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 26 12:43:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 26 12:43:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 26 12:43:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 26 12:43:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 26 12:43:49 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 26 12:43:49 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 82 pg[9.9( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=82 pruub=10.807430267s) [2] r=-1 lpr=82 pi=[55,82)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 169.130233765s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:49 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 82 pg[9.9( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=82 pruub=10.807010651s) [2] r=-1 lpr=82 pi=[55,82)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.130233765s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:49 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 82 pg[9.19( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=82 pruub=10.805469513s) [2] r=-1 lpr=82 pi=[55,82)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 169.129272461s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:49 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 82 pg[9.19( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=82 pruub=10.805381775s) [2] r=-1 lpr=82 pi=[55,82)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.129272461s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:49 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 82 pg[6.9( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=59/59 les/c/f=60/60/0 sis=82) [1] r=0 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:49.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:50.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:43:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 26 12:43:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 26 12:43:50 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 26 12:43:50 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 83 pg[9.9( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=83) [2]/[1] r=0 lpr=83 pi=[55,83)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:50 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 83 pg[9.9( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=83) [2]/[1] r=0 lpr=83 pi=[55,83)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:50 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 83 pg[9.19( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=83) [2]/[1] r=0 lpr=83 pi=[55,83)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:50 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 83 pg[9.19( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=83) [2]/[1] r=0 lpr=83 pi=[55,83)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:50 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 83 pg[6.9( v 48'39 (0'0,48'39] local-lis/les=82/83 n=1 ec=51/21 lis/c=59/59 les/c/f=60/60/0 sis=82) [1] r=0 lpr=82 pi=[59,82)/1 crt=48'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:50 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 26 12:43:50 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 26 12:43:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 186 B/s wr, 34 op/s; 40 B/s, 2 objects/s recovering
Jan 26 12:43:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 26 12:43:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 26 12:43:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 26 12:43:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 26 12:43:51 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 26 12:43:51 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 26 12:43:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 26 12:43:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 26 12:43:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 26 12:43:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 26 12:43:51 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 26 12:43:51 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 84 pg[9.a( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=84 pruub=9.083811760s) [0] r=-1 lpr=84 pi=[55,84)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 169.129898071s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:51 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 84 pg[9.1a( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=84 pruub=9.082993507s) [0] r=-1 lpr=84 pi=[55,84)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 169.129837036s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:51 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 84 pg[9.a( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=84 pruub=9.083708763s) [0] r=-1 lpr=84 pi=[55,84)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.129898071s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:51 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 84 pg[9.1a( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=84 pruub=9.082954407s) [0] r=-1 lpr=84 pi=[55,84)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.129837036s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:51 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 84 pg[9.19( v 48'1155 (0'0,48'1155] local-lis/les=83/84 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[55,83)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:51 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 84 pg[9.9( v 48'1155 (0'0,48'1155] local-lis/les=83/84 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[55,83)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:51 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 26 12:43:51 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 26 12:43:51 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 26 12:43:51 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 26 12:43:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:43:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:51.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:43:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:52.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 26 12:43:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 26 12:43:52 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 26 12:43:52 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 85 pg[9.1a( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=85) [0]/[1] r=0 lpr=85 pi=[55,85)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:52 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 85 pg[9.a( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=85) [0]/[1] r=0 lpr=85 pi=[55,85)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:52 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 85 pg[9.1a( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=85) [0]/[1] r=0 lpr=85 pi=[55,85)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:52 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 85 pg[9.a( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=85) [0]/[1] r=0 lpr=85 pi=[55,85)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:52 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 85 pg[9.9( v 48'1155 (0'0,48'1155] local-lis/les=83/84 n=6 ec=55/41 lis/c=83/55 les/c/f=84/56/0 sis=85 pruub=14.997102737s) [2] async=[2] r=-1 lpr=85 pi=[55,85)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 176.057891846s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:52 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 85 pg[9.19( v 48'1155 (0'0,48'1155] local-lis/les=83/84 n=5 ec=55/41 lis/c=83/55 les/c/f=84/56/0 sis=85 pruub=14.994447708s) [2] async=[2] r=-1 lpr=85 pi=[55,85)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 176.055267334s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:52 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 85 pg[9.9( v 48'1155 (0'0,48'1155] local-lis/les=83/84 n=6 ec=55/41 lis/c=83/55 les/c/f=84/56/0 sis=85 pruub=14.996857643s) [2] r=-1 lpr=85 pi=[55,85)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 176.057891846s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:52 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 85 pg[9.19( v 48'1155 (0'0,48'1155] local-lis/les=83/84 n=5 ec=55/41 lis/c=83/55 les/c/f=84/56/0 sis=85 pruub=14.994173050s) [2] r=-1 lpr=85 pi=[55,85)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 176.055267334s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 2 active+remapped, 303 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 255 B/s wr, 3 op/s; 82 B/s, 2 objects/s recovering
Jan 26 12:43:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 26 12:43:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 26 12:43:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 26 12:43:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 26 12:43:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 26 12:43:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 26 12:43:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 26 12:43:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 26 12:43:53 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 26 12:43:53 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 86 pg[6.b( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=63/63 les/c/f=64/64/0 sis=86) [1] r=0 lpr=86 pi=[63,86)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:43:53 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 86 pg[9.1a( v 48'1155 (0'0,48'1155] local-lis/les=85/86 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=85) [0]/[1] async=[0] r=0 lpr=85 pi=[55,85)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:53 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 86 pg[9.a( v 48'1155 (0'0,48'1155] local-lis/les=85/86 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=85) [0]/[1] async=[0] r=0 lpr=85 pi=[55,85)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:53 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 26 12:43:53 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 26 12:43:53 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 26 12:43:53 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 26 12:43:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:43:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:53.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:43:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:43:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:54.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:43:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 26 12:43:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 26 12:43:54 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 26 12:43:54 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 87 pg[9.a( v 48'1155 (0'0,48'1155] local-lis/les=85/86 n=6 ec=55/41 lis/c=85/55 les/c/f=86/56/0 sis=87 pruub=14.912722588s) [0] async=[0] r=-1 lpr=87 pi=[55,87)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 178.077209473s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:54 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 87 pg[9.1a( v 48'1155 (0'0,48'1155] local-lis/les=85/86 n=5 ec=55/41 lis/c=85/55 les/c/f=86/56/0 sis=87 pruub=14.911294937s) [0] async=[0] r=-1 lpr=87 pi=[55,87)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 178.076614380s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:54 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 87 pg[9.a( v 48'1155 (0'0,48'1155] local-lis/les=85/86 n=6 ec=55/41 lis/c=85/55 les/c/f=86/56/0 sis=87 pruub=14.911451340s) [0] r=-1 lpr=87 pi=[55,87)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.077209473s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:54 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 87 pg[9.1a( v 48'1155 (0'0,48'1155] local-lis/les=85/86 n=5 ec=55/41 lis/c=85/55 les/c/f=86/56/0 sis=87 pruub=14.910916328s) [0] r=-1 lpr=87 pi=[55,87)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 178.076614380s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:43:54 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 87 pg[6.b( v 48'39 lc 0'0 (0'0,48'39] local-lis/les=86/87 n=1 ec=51/21 lis/c=63/63 les/c/f=64/64/0 sis=86) [1] r=0 lpr=86 pi=[63,86)/1 crt=48'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:43:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 2 active+remapped, 303 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 255 B/s wr, 3 op/s; 82 B/s, 2 objects/s recovering
Jan 26 12:43:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 26 12:43:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 26 12:43:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 26 12:43:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 26 12:43:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:43:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 26 12:43:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 26 12:43:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 26 12:43:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 26 12:43:55 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 26 12:43:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:55.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 26 12:43:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 26 12:43:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 26 12:43:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 26 12:43:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:56.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 2 active+remapped, 303 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 49 B/s, 2 objects/s recovering
Jan 26 12:43:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 26 12:43:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 26 12:43:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 26 12:43:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 26 12:43:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 26 12:43:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 26 12:43:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 26 12:43:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 26 12:43:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 26 12:43:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 26 12:43:57 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 26 12:43:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:57.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:43:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:43:58.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:43:58 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 26 12:43:58 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 26 12:43:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 26 12:43:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 26 12:43:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 26 12:43:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 26 12:43:58 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 26 12:43:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 3 objects/s recovering
Jan 26 12:43:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 26 12:43:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 26 12:43:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 26 12:43:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 26 12:43:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 26 12:43:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 26 12:43:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 26 12:43:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 26 12:43:59 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 26 12:43:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 26 12:43:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 26 12:43:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:43:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:43:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:43:59.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:43:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 91 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=71/72 n=1 ec=51/21 lis/c=71/71 les/c/f=72/72/0 sis=91 pruub=12.809213638s) [0] r=-1 lpr=91 pi=[71,91)/1 crt=48'39 mlcod 48'39 active pruub 181.366500854s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:43:59 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 91 pg[6.e( v 48'39 (0'0,48'39] local-lis/les=71/72 n=1 ec=51/21 lis/c=71/71 les/c/f=72/72/0 sis=91 pruub=12.809147835s) [0] r=-1 lpr=91 pi=[71,91)/1 crt=48'39 mlcod 0'0 unknown NOTIFY pruub 181.366500854s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:44:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:00.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:00 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 26 12:44:00 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 26 12:44:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:44:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 26 12:44:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 26 12:44:00 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 26 12:44:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 26 12:44:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 26 12:44:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 26 12:44:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 26 12:44:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 26 12:44:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 26 12:44:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 26 12:44:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 26 12:44:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 26 12:44:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 26 12:44:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 26 12:44:01 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 26 12:44:01 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 93 pg[6.f( empty local-lis/les=0/0 n=0 ec=51/21 lis/c=63/63 les/c/f=64/64/0 sis=93) [1] r=0 lpr=93 pi=[63,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:44:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:44:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:01.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:44:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 26 12:44:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 26 12:44:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 26 12:44:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 26 12:44:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:02.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 26 12:44:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 26 12:44:02 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 26 12:44:02 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 94 pg[6.f( v 48'39 lc 45'1 (0'0,48'39] local-lis/les=93/94 n=1 ec=51/21 lis/c=63/63 les/c/f=64/64/0 sis=93) [1] r=0 lpr=93 pi=[63,93)/1 crt=48'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:44:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 3 peering, 302 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:44:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:44:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 26 12:44:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 26 12:44:03 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 26 12:44:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:03.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:04.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 26 12:44:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 26 12:44:04 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 26 12:44:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 3 peering, 302 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 82 B/s, 3 objects/s recovering
Jan 26 12:44:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:44:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 26 12:44:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 26 12:44:05 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 26 12:44:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:05.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:06.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 1 peering, 304 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 50 B/s, 2 objects/s recovering
Jan 26 12:44:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:07.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:08.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 137 B/s, 2 objects/s recovering
Jan 26 12:44:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Jan 26 12:44:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 26 12:44:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 26 12:44:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 26 12:44:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 26 12:44:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 26 12:44:09 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 26 12:44:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 98 pg[9.10( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=98 pruub=15.339726448s) [0] r=-1 lpr=98 pi=[55,98)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 193.129623413s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 98 pg[9.10( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=98 pruub=15.339633942s) [0] r=-1 lpr=98 pi=[55,98)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.129623413s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:44:09 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 26 12:44:09 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 26 12:44:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:09.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 26 12:44:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 26 12:44:10 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 26 12:44:10 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 99 pg[9.10( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=99) [0]/[1] r=0 lpr=99 pi=[55,99)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:10 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 99 pg[9.10( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=99) [0]/[1] r=0 lpr=99 pi=[55,99)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:44:10 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 26 12:44:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:10.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:10 np0005596060 systemd-logind[786]: New session 34 of user zuul.
Jan 26 12:44:10 np0005596060 systemd[1]: Started Session 34 of User zuul.
Jan 26 12:44:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:44:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 137 B/s, 2 objects/s recovering
Jan 26 12:44:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Jan 26 12:44:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 26 12:44:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 26 12:44:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 26 12:44:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 26 12:44:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 26 12:44:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 100 pg[9.11( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=100 pruub=13.314075470s) [0] r=-1 lpr=100 pi=[55,100)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 193.130111694s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 100 pg[9.11( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=100 pruub=13.313994408s) [0] r=-1 lpr=100 pi=[55,100)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.130111694s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:44:11 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 100 pg[9.10( v 48'1155 (0'0,48'1155] local-lis/les=99/100 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=99) [0]/[1] async=[0] r=0 lpr=99 pi=[55,99)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:44:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 26 12:44:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 26 12:44:11 np0005596060 python3.9[98851]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:44:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:11.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 26 12:44:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 26 12:44:12 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 26 12:44:12 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 101 pg[9.11( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=101) [0]/[1] r=0 lpr=101 pi=[55,101)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:12 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 101 pg[9.10( v 48'1155 (0'0,48'1155] local-lis/les=99/100 n=6 ec=55/41 lis/c=99/55 les/c/f=100/56/0 sis=101 pruub=14.998292923s) [0] async=[0] r=-1 lpr=101 pi=[55,101)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 195.817550659s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:12 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 101 pg[9.11( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=101) [0]/[1] r=0 lpr=101 pi=[55,101)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:44:12 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 101 pg[9.10( v 48'1155 (0'0,48'1155] local-lis/les=99/100 n=6 ec=55/41 lis/c=99/55 les/c/f=100/56/0 sis=101 pruub=14.998230934s) [0] r=-1 lpr=101 pi=[55,101)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 195.817550659s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:44:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:12.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:44:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 26 12:44:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 26 12:44:13 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 26 12:44:13 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 102 pg[9.11( v 48'1155 (0'0,48'1155] local-lis/les=101/102 n=6 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=101) [0]/[1] async=[0] r=0 lpr=101 pi=[55,101)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:44:13 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 26 12:44:13 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 26 12:44:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:13.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:44:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:44:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:44:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:44:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:44:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:44:14 np0005596060 python3.9[99116]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:44:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 26 12:44:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 26 12:44:14 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 26 12:44:14 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 103 pg[9.11( v 48'1155 (0'0,48'1155] local-lis/les=101/102 n=6 ec=55/41 lis/c=101/55 les/c/f=102/56/0 sis=103 pruub=14.997906685s) [0] async=[0] r=-1 lpr=103 pi=[55,103)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 197.841003418s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:14 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 103 pg[9.11( v 48'1155 (0'0,48'1155] local-lis/les=101/102 n=6 ec=55/41 lis/c=101/55 les/c/f=102/56/0 sis=103 pruub=14.997784615s) [0] r=-1 lpr=103 pi=[55,103)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 197.841003418s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:44:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:14.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:44:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 26 12:44:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 26 12:44:15 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 26 12:44:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:44:15 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 26 12:44:15 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 26 12:44:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:44:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:15.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:44:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:16.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 457 KiB data, 126 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 218 B/s wr, 20 op/s; 23 B/s, 1 objects/s recovering
Jan 26 12:44:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Jan 26 12:44:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 26 12:44:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 26 12:44:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:17.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 26 12:44:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:18.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 26 12:44:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 26 12:44:18 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 26 12:44:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 9.1 KiB/s rd, 0 B/s wr, 16 op/s; 19 B/s, 1 objects/s recovering
Jan 26 12:44:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Jan 26 12:44:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 26 12:44:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 26 12:44:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 26 12:44:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 26 12:44:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 26 12:44:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 26 12:44:19 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 26 12:44:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:19.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:20.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:44:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 26 12:44:20 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 26 12:44:20 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 26 12:44:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 105 pg[9.12( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=105 pruub=11.808699608s) [0] r=-1 lpr=105 pi=[55,105)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 201.130218506s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:20 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 106 pg[9.12( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=105 pruub=11.808613777s) [0] r=-1 lpr=105 pi=[55,105)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.130218506s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:44:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 0 B/s wr, 15 op/s; 18 B/s, 1 objects/s recovering
Jan 26 12:44:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Jan 26 12:44:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 26 12:44:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 26 12:44:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 26 12:44:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 26 12:44:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 26 12:44:21 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 26 12:44:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 107 pg[9.12( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=107) [0]/[1] r=0 lpr=107 pi=[55,107)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:21 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 107 pg[9.12( v 48'1155 (0'0,48'1155] local-lis/les=55/56 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=107) [0]/[1] r=0 lpr=107 pi=[55,107)/1 crt=48'1155 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 26 12:44:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:44:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:21.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:44:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:22.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 26 12:44:22 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 26 12:44:22 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 26 12:44:22 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 26 12:44:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 26 12:44:22 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 26 12:44:22 np0005596060 systemd-logind[786]: Session 34 logged out. Waiting for processes to exit.
Jan 26 12:44:22 np0005596060 systemd[1]: session-34.scope: Deactivated successfully.
Jan 26 12:44:22 np0005596060 systemd[1]: session-34.scope: Consumed 8.861s CPU time.
Jan 26 12:44:22 np0005596060 systemd-logind[786]: Removed session 34.
Jan 26 12:44:22 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 108 pg[9.12( v 48'1155 (0'0,48'1155] local-lis/les=107/108 n=5 ec=55/41 lis/c=55/55 les/c/f=56/56/0 sis=107) [0]/[1] async=[0] r=0 lpr=107 pi=[55,107)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:44:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:44:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:23.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 26 12:44:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 26 12:44:24 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 26 12:44:24 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 109 pg[9.12( v 48'1155 (0'0,48'1155] local-lis/les=107/108 n=5 ec=55/41 lis/c=107/55 les/c/f=108/56/0 sis=109 pruub=14.780590057s) [0] async=[0] r=-1 lpr=109 pi=[55,109)/1 crt=48'1155 lcod 0'0 mlcod 0'0 active pruub 207.518936157s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:24 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 109 pg[9.12( v 48'1155 (0'0,48'1155] local-lis/les=107/108 n=5 ec=55/41 lis/c=107/55 les/c/f=108/56/0 sis=109 pruub=14.780081749s) [0] r=-1 lpr=109 pi=[55,109)/1 crt=48'1155 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 207.518936157s@ mbc={}] state<Start>: transitioning to Stray
Jan 26 12:44:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:24.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:44:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 26 12:44:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 26 12:44:25 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 26 12:44:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:44:25 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 26 12:44:25 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 26 12:44:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:25.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:26.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 26 12:44:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Jan 26 12:44:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 26 12:44:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 26 12:44:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:27.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 26 12:44:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 26 12:44:28 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 26 12:44:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:28.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:28 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 26 12:44:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 26 12:44:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Jan 26 12:44:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 26 12:44:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 26 12:44:29 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 26 12:44:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 26 12:44:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 26 12:44:29 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 26 12:44:29 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 26 12:44:29 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 26 12:44:29 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 26 12:44:29 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 26 12:44:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:29.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:30.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:44:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 26 12:44:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 26 12:44:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Jan 26 12:44:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 26 12:44:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 26 12:44:30 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 26 12:44:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 26 12:44:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:31.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 26 12:44:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 26 12:44:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 26 12:44:32 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 26 12:44:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:32.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 23 B/s, 0 objects/s recovering
Jan 26 12:44:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 26 12:44:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 26 12:44:33 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 26 12:44:33 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 26 12:44:33 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 26 12:44:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:33.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:33 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 26 12:44:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 26 12:44:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 26 12:44:34 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 26 12:44:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:34.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:34 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 26 12:44:34 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 26 12:44:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 1 remapped+peering, 1 active+remapped, 303 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 26 12:44:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:44:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:35.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:36 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 26 12:44:36 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 26 12:44:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:36.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 1 remapped+peering, 304 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Jan 26 12:44:37 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 26 12:44:37 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 26 12:44:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:37.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:38.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:38 np0005596060 systemd-logind[786]: New session 35 of user zuul.
Jan 26 12:44:38 np0005596060 systemd[1]: Started Session 35 of User zuul.
Jan 26 12:44:38 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.9 deep-scrub starts
Jan 26 12:44:38 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.9 deep-scrub ok
Jan 26 12:44:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 16 B/s, 0 objects/s recovering
Jan 26 12:44:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Jan 26 12:44:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 26 12:44:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 26 12:44:39 np0005596060 python3.9[99389]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 26 12:44:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 26 12:44:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 26 12:44:39 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 26 12:44:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:39.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 26 12:44:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 26 12:44:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:40.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:44:40 np0005596060 python3.9[99563]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:44:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 458 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 0 objects/s recovering
Jan 26 12:44:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Jan 26 12:44:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 26 12:44:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 26 12:44:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 26 12:44:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 26 12:44:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 26 12:44:41 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 26 12:44:41 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 118 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=85/85 les/c/f=86/86/0 sis=118) [1] r=0 lpr=118 pi=[85,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:44:41 np0005596060 python3.9[99720]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:44:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:41.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 26 12:44:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:42.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 26 12:44:42 np0005596060 python3.9[99873]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:44:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:44:42 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 26 12:44:42 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 119 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=85/85 les/c/f=86/86/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[85,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:42 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 119 pg[9.19( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=85/85 les/c/f=86/86/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[85,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 26 12:44:43 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 26 12:44:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 26 12:44:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:43.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:43 np0005596060 python3.9[100172]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:44:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:44:43
Jan 26 12:44:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:44:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:44:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 26 12:44:44 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 26 12:44:44 np0005596060 podman[100200]: 2026-01-26 17:44:44.126711325 +0000 UTC m=+0.360371596 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:44:44 np0005596060 podman[100200]: 2026-01-26 17:44:44.245138742 +0000 UTC m=+0.478799023 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 12:44:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:44.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:44:44 np0005596060 python3.9[100421]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:44:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:44:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:44:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:44 np0005596060 podman[100530]: 2026-01-26 17:44:44.95323329 +0000 UTC m=+0.063405755 container exec e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:44:44 np0005596060 podman[100530]: 2026-01-26 17:44:44.989517268 +0000 UTC m=+0.099689733 container exec_died e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 26 12:44:45 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 121 pg[9.19( v 48'1155 (0'0,48'1155] local-lis/les=0/0 n=5 ec=55/41 lis/c=119/85 les/c/f=120/86/0 sis=121) [1] r=0 lpr=121 pi=[85,121)/1 luod=0'0 crt=48'1155 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:45 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 121 pg[9.19( v 48'1155 (0'0,48'1155] local-lis/les=0/0 n=5 ec=55/41 lis/c=119/85 les/c/f=120/86/0 sis=121) [1] r=0 lpr=121 pi=[85,121)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:45 np0005596060 podman[100649]: 2026-01-26 17:44:45.214782609 +0000 UTC m=+0.073214054 container exec 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, description=keepalived for Ceph, name=keepalived, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public)
Jan 26 12:44:45 np0005596060 podman[100649]: 2026-01-26 17:44:45.231584734 +0000 UTC m=+0.090016159 container exec_died 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, version=2.2.4, release=1793, vendor=Red Hat, Inc., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-type=git, description=keepalived for Ceph, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9)
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:44:45 np0005596060 python3.9[100784]: ansible-ansible.builtin.service_facts Invoked
Jan 26 12:44:45 np0005596060 network[100872]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 12:44:45 np0005596060 network[100873]: 'network-scripts' will be removed from distribution in near future.
Jan 26 12:44:45 np0005596060 network[100874]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 12:44:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:45.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:46 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9bd3671a-818a-46fe-b816-f30b094251a2 does not exist
Jan 26 12:44:46 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8ca80cc1-711d-4645-b4fa-365ac2ca6139 does not exist
Jan 26 12:44:46 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8f58b12a-436f-4c3a-b0ed-97e277fed745 does not exist
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:44:46 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 26 12:44:46 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 122 pg[9.19( v 48'1155 (0'0,48'1155] local-lis/les=121/122 n=5 ec=55/41 lis/c=119/85 les/c/f=120/86/0 sis=121) [1] r=0 lpr=121 pi=[85,121)/1 crt=48'1155 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:44:46 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 26 12:44:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:46.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:46 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 26 12:44:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 1 unknown, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:44:46 np0005596060 podman[101071]: 2026-01-26 17:44:46.988475291 +0000 UTC m=+0.091911767 container create f2d7d1c8df4a002f9d9c7a0a722f6cf03261f5069730921763e9436e26b2d7a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 26 12:44:47 np0005596060 podman[101071]: 2026-01-26 17:44:46.929082198 +0000 UTC m=+0.032518744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:44:47 np0005596060 systemd[1]: Started libpod-conmon-f2d7d1c8df4a002f9d9c7a0a722f6cf03261f5069730921763e9436e26b2d7a5.scope.
Jan 26 12:44:47 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:44:47 np0005596060 podman[101071]: 2026-01-26 17:44:47.120202475 +0000 UTC m=+0.223638941 container init f2d7d1c8df4a002f9d9c7a0a722f6cf03261f5069730921763e9436e26b2d7a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:44:47 np0005596060 podman[101071]: 2026-01-26 17:44:47.129443478 +0000 UTC m=+0.232879934 container start f2d7d1c8df4a002f9d9c7a0a722f6cf03261f5069730921763e9436e26b2d7a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 12:44:47 np0005596060 podman[101071]: 2026-01-26 17:44:47.133219744 +0000 UTC m=+0.236656230 container attach f2d7d1c8df4a002f9d9c7a0a722f6cf03261f5069730921763e9436e26b2d7a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 12:44:47 np0005596060 xenodochial_tesla[101097]: 167 167
Jan 26 12:44:47 np0005596060 podman[101071]: 2026-01-26 17:44:47.138763164 +0000 UTC m=+0.242199660 container died f2d7d1c8df4a002f9d9c7a0a722f6cf03261f5069730921763e9436e26b2d7a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:44:47 np0005596060 systemd[1]: libpod-f2d7d1c8df4a002f9d9c7a0a722f6cf03261f5069730921763e9436e26b2d7a5.scope: Deactivated successfully.
Jan 26 12:44:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay-426f467f5db3ef4e8a958d7f53bde41610d557beb074e9758842a0a2b9785dbc-merged.mount: Deactivated successfully.
Jan 26 12:44:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:44:47 np0005596060 podman[101071]: 2026-01-26 17:44:47.223130289 +0000 UTC m=+0.326566755 container remove f2d7d1c8df4a002f9d9c7a0a722f6cf03261f5069730921763e9436e26b2d7a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 12:44:47 np0005596060 systemd[1]: libpod-conmon-f2d7d1c8df4a002f9d9c7a0a722f6cf03261f5069730921763e9436e26b2d7a5.scope: Deactivated successfully.
Jan 26 12:44:47 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 26 12:44:47 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 26 12:44:47 np0005596060 podman[101131]: 2026-01-26 17:44:47.446745948 +0000 UTC m=+0.056021168 container create 84dfd80d237c403ff6066abaaa55dbdf1b8b66b3d78acf315d4be28e24efa0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 12:44:47 np0005596060 systemd[1]: Started libpod-conmon-84dfd80d237c403ff6066abaaa55dbdf1b8b66b3d78acf315d4be28e24efa0e6.scope.
Jan 26 12:44:47 np0005596060 podman[101131]: 2026-01-26 17:44:47.42707372 +0000 UTC m=+0.036349020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:44:47 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:44:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43af2a4d55cddbfb8a9340a57b35cbfec2d5913fc70b6a4b390f1d185b49ad69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43af2a4d55cddbfb8a9340a57b35cbfec2d5913fc70b6a4b390f1d185b49ad69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43af2a4d55cddbfb8a9340a57b35cbfec2d5913fc70b6a4b390f1d185b49ad69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43af2a4d55cddbfb8a9340a57b35cbfec2d5913fc70b6a4b390f1d185b49ad69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43af2a4d55cddbfb8a9340a57b35cbfec2d5913fc70b6a4b390f1d185b49ad69/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:47 np0005596060 podman[101131]: 2026-01-26 17:44:47.558465365 +0000 UTC m=+0.167740575 container init 84dfd80d237c403ff6066abaaa55dbdf1b8b66b3d78acf315d4be28e24efa0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_poincare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:44:47 np0005596060 podman[101131]: 2026-01-26 17:44:47.571195958 +0000 UTC m=+0.180471168 container start 84dfd80d237c403ff6066abaaa55dbdf1b8b66b3d78acf315d4be28e24efa0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 12:44:47 np0005596060 podman[101131]: 2026-01-26 17:44:47.575058775 +0000 UTC m=+0.184333985 container attach 84dfd80d237c403ff6066abaaa55dbdf1b8b66b3d78acf315d4be28e24efa0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_poincare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 12:44:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:47.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:48.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:48 np0005596060 busy_poincare[101155]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:44:48 np0005596060 busy_poincare[101155]: --> relative data size: 1.0
Jan 26 12:44:48 np0005596060 busy_poincare[101155]: --> All data devices are unavailable
Jan 26 12:44:48 np0005596060 systemd[1]: libpod-84dfd80d237c403ff6066abaaa55dbdf1b8b66b3d78acf315d4be28e24efa0e6.scope: Deactivated successfully.
Jan 26 12:44:48 np0005596060 podman[101131]: 2026-01-26 17:44:48.447384309 +0000 UTC m=+1.056659559 container died 84dfd80d237c403ff6066abaaa55dbdf1b8b66b3d78acf315d4be28e24efa0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 12:44:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.5 KiB/s rd, 171 B/s wr, 15 op/s; 36 B/s, 2 objects/s recovering
Jan 26 12:44:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Jan 26 12:44:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 26 12:44:49 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 26 12:44:49 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:44:49.273116) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 12:44:49 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 26 12:44:49 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449489273356, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7487, "num_deletes": 251, "total_data_size": 9616032, "memory_usage": 9786688, "flush_reason": "Manual Compaction"}
Jan 26 12:44:49 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 26 12:44:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:49.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay-43af2a4d55cddbfb8a9340a57b35cbfec2d5913fc70b6a4b390f1d185b49ad69-merged.mount: Deactivated successfully.
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449490074237, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7841308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 145, "largest_seqno": 7623, "table_properties": {"data_size": 7813496, "index_size": 18289, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 79058, "raw_average_key_size": 23, "raw_value_size": 7748028, "raw_average_value_size": 2295, "num_data_blocks": 806, "num_entries": 3376, "num_filter_entries": 3376, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449134, "oldest_key_time": 1769449134, "file_creation_time": 1769449489, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 801264 microseconds, and 29336 cpu microseconds.
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:44:50.074388) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7841308 bytes OK
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:44:50.074438) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:44:50.077916) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:44:50.077955) EVENT_LOG_v1 {"time_micros": 1769449490077948, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:44:50.077981) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9583268, prev total WAL file size 9584772, number of live WAL files 2.
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:44:50.081475) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7657KB) 13(53KB) 8(1944B)]
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449490081608, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7898101, "oldest_snapshot_seqno": -1}
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3188 keys, 7853681 bytes, temperature: kUnknown
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449490140630, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7853681, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7826229, "index_size": 18382, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 76963, "raw_average_key_size": 24, "raw_value_size": 7762451, "raw_average_value_size": 2434, "num_data_blocks": 813, "num_entries": 3188, "num_filter_entries": 3188, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769449490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 12:44:50 np0005596060 podman[101131]: 2026-01-26 17:44:50.141506048 +0000 UTC m=+2.750781268 container remove 84dfd80d237c403ff6066abaaa55dbdf1b8b66b3d78acf315d4be28e24efa0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_poincare, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:44:50.141006) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7853681 bytes
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:44:50.143255) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.6 rd, 132.8 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3480, records dropped: 292 output_compression: NoCompression
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:44:50.143306) EVENT_LOG_v1 {"time_micros": 1769449490143284, "job": 4, "event": "compaction_finished", "compaction_time_micros": 59139, "compaction_time_cpu_micros": 19269, "output_level": 6, "num_output_files": 1, "total_output_size": 7853681, "num_input_records": 3480, "num_output_records": 3188, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449490146937, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449490147047, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449490147107, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:44:50.081322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:44:50 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 123 pg[9.1a( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=87/87 les/c/f=88/88/0 sis=123) [1] r=0 lpr=123 pi=[87,123)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:44:50 np0005596060 systemd[1]: libpod-conmon-84dfd80d237c403ff6066abaaa55dbdf1b8b66b3d78acf315d4be28e24efa0e6.scope: Deactivated successfully.
Jan 26 12:44:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:50.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 26 12:44:50 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 124 pg[9.1a( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=87/87 les/c/f=88/88/0 sis=124) [1]/[0] r=-1 lpr=124 pi=[87,124)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:50 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 124 pg[9.1a( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=87/87 les/c/f=88/88/0 sis=124) [1]/[0] r=-1 lpr=124 pi=[87,124)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 26 12:44:50 np0005596060 podman[101526]: 2026-01-26 17:44:50.835200923 +0000 UTC m=+0.052981422 container create 36ce35907a95ef7413ceb14ace28dd1654b2d1fb70b73e2d0c7d37ea4ed4bb72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ritchie, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 12:44:50 np0005596060 python3.9[101498]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:44:50 np0005596060 systemd[75887]: Created slice User Background Tasks Slice.
Jan 26 12:44:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 8.9 KiB/s rd, 178 B/s wr, 16 op/s; 38 B/s, 2 objects/s recovering
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Jan 26 12:44:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 26 12:44:50 np0005596060 systemd[75887]: Starting Cleanup of User's Temporary Files and Directories...
Jan 26 12:44:50 np0005596060 systemd[1]: Started libpod-conmon-36ce35907a95ef7413ceb14ace28dd1654b2d1fb70b73e2d0c7d37ea4ed4bb72.scope.
Jan 26 12:44:50 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:44:50 np0005596060 systemd[75887]: Finished Cleanup of User's Temporary Files and Directories.
Jan 26 12:44:50 np0005596060 podman[101526]: 2026-01-26 17:44:50.813804821 +0000 UTC m=+0.031585330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:44:50 np0005596060 podman[101526]: 2026-01-26 17:44:50.904340422 +0000 UTC m=+0.122120921 container init 36ce35907a95ef7413ceb14ace28dd1654b2d1fb70b73e2d0c7d37ea4ed4bb72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ritchie, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 12:44:50 np0005596060 podman[101526]: 2026-01-26 17:44:50.911394801 +0000 UTC m=+0.129175290 container start 36ce35907a95ef7413ceb14ace28dd1654b2d1fb70b73e2d0c7d37ea4ed4bb72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ritchie, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:44:50 np0005596060 podman[101526]: 2026-01-26 17:44:50.915399262 +0000 UTC m=+0.133179801 container attach 36ce35907a95ef7413ceb14ace28dd1654b2d1fb70b73e2d0c7d37ea4ed4bb72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ritchie, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:44:50 np0005596060 infallible_ritchie[101543]: 167 167
Jan 26 12:44:50 np0005596060 systemd[1]: libpod-36ce35907a95ef7413ceb14ace28dd1654b2d1fb70b73e2d0c7d37ea4ed4bb72.scope: Deactivated successfully.
Jan 26 12:44:50 np0005596060 podman[101526]: 2026-01-26 17:44:50.918844209 +0000 UTC m=+0.136624698 container died 36ce35907a95ef7413ceb14ace28dd1654b2d1fb70b73e2d0c7d37ea4ed4bb72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ritchie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 12:44:50 np0005596060 systemd[1]: var-lib-containers-storage-overlay-42e71e9176713e507a9cbce4ab297cc1bbdbcbab7e420a47ab042faf706b8d47-merged.mount: Deactivated successfully.
Jan 26 12:44:50 np0005596060 podman[101526]: 2026-01-26 17:44:50.973214105 +0000 UTC m=+0.190994594 container remove 36ce35907a95ef7413ceb14ace28dd1654b2d1fb70b73e2d0c7d37ea4ed4bb72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_ritchie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:44:50 np0005596060 systemd[1]: libpod-conmon-36ce35907a95ef7413ceb14ace28dd1654b2d1fb70b73e2d0c7d37ea4ed4bb72.scope: Deactivated successfully.
Jan 26 12:44:51 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 26 12:44:51 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 26 12:44:51 np0005596060 podman[101593]: 2026-01-26 17:44:51.184146403 +0000 UTC m=+0.064762780 container create 61368c057e217913b719848d4dd3ae0bbd1778bc18b4e57f738f93ddfe7d2760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 26 12:44:51 np0005596060 systemd[1]: Started libpod-conmon-61368c057e217913b719848d4dd3ae0bbd1778bc18b4e57f738f93ddfe7d2760.scope.
Jan 26 12:44:51 np0005596060 podman[101593]: 2026-01-26 17:44:51.158506264 +0000 UTC m=+0.039122641 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:44:51 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:44:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b224f0fb5b6774af37308e8903363f84d7eb433ec7860ca050cdc128c969204c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b224f0fb5b6774af37308e8903363f84d7eb433ec7860ca050cdc128c969204c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b224f0fb5b6774af37308e8903363f84d7eb433ec7860ca050cdc128c969204c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b224f0fb5b6774af37308e8903363f84d7eb433ec7860ca050cdc128c969204c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:51 np0005596060 podman[101593]: 2026-01-26 17:44:51.292132995 +0000 UTC m=+0.172749412 container init 61368c057e217913b719848d4dd3ae0bbd1778bc18b4e57f738f93ddfe7d2760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_albattani, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 12:44:51 np0005596060 podman[101593]: 2026-01-26 17:44:51.300771424 +0000 UTC m=+0.181387761 container start 61368c057e217913b719848d4dd3ae0bbd1778bc18b4e57f738f93ddfe7d2760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_albattani, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:44:51 np0005596060 podman[101593]: 2026-01-26 17:44:51.304651062 +0000 UTC m=+0.185267579 container attach 61368c057e217913b719848d4dd3ae0bbd1778bc18b4e57f738f93ddfe7d2760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 12:44:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 26 12:44:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 26 12:44:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 26 12:44:51 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 26 12:44:51 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 125 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=67/67 les/c/f=68/68/0 sis=125) [1] r=0 lpr=125 pi=[67,125)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:44:51 np0005596060 python3.9[101739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:44:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:44:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:51.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]: {
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:    "1": [
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:        {
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            "devices": [
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "/dev/loop3"
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            ],
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            "lv_name": "ceph_lv0",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            "lv_size": "7511998464",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            "name": "ceph_lv0",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            "tags": {
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "ceph.cluster_name": "ceph",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "ceph.crush_device_class": "",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "ceph.encrypted": "0",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "ceph.osd_id": "1",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "ceph.type": "block",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:                "ceph.vdo": "0"
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            },
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            "type": "block",
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:            "vg_name": "ceph_vg0"
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:        }
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]:    ]
Jan 26 12:44:52 np0005596060 sharp_albattani[101640]: }
Jan 26 12:44:52 np0005596060 systemd[1]: libpod-61368c057e217913b719848d4dd3ae0bbd1778bc18b4e57f738f93ddfe7d2760.scope: Deactivated successfully.
Jan 26 12:44:52 np0005596060 podman[101772]: 2026-01-26 17:44:52.226362236 +0000 UTC m=+0.029874477 container died 61368c057e217913b719848d4dd3ae0bbd1778bc18b4e57f738f93ddfe7d2760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_albattani, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 26 12:44:52 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b224f0fb5b6774af37308e8903363f84d7eb433ec7860ca050cdc128c969204c-merged.mount: Deactivated successfully.
Jan 26 12:44:52 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 26 12:44:52 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 26 12:44:52 np0005596060 podman[101772]: 2026-01-26 17:44:52.286697232 +0000 UTC m=+0.090209463 container remove 61368c057e217913b719848d4dd3ae0bbd1778bc18b4e57f738f93ddfe7d2760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_albattani, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 26 12:44:52 np0005596060 systemd[1]: libpod-conmon-61368c057e217913b719848d4dd3ae0bbd1778bc18b4e57f738f93ddfe7d2760.scope: Deactivated successfully.
Jan 26 12:44:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:52.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 26 12:44:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 26 12:44:52 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 26 12:44:52 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 126 pg[9.1a( v 48'1155 (0'0,48'1155] local-lis/les=0/0 n=5 ec=55/41 lis/c=124/87 les/c/f=125/88/0 sis=126) [1] r=0 lpr=126 pi=[87,126)/1 luod=0'0 crt=48'1155 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:52 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 126 pg[9.1a( v 48'1155 (0'0,48'1155] local-lis/les=0/0 n=5 ec=55/41 lis/c=124/87 les/c/f=125/88/0 sis=126) [1] r=0 lpr=126 pi=[87,126)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:44:52 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 126 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=67/67 les/c/f=68/68/0 sis=126) [1]/[2] r=-1 lpr=126 pi=[67,126)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:52 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 126 pg[9.1b( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=67/67 les/c/f=68/68/0 sis=126) [1]/[2] r=-1 lpr=126 pi=[67,126)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 26 12:44:52 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 26 12:44:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:44:53 np0005596060 podman[102055]: 2026-01-26 17:44:53.03310356 +0000 UTC m=+0.027170838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:44:53 np0005596060 podman[102055]: 2026-01-26 17:44:53.208705363 +0000 UTC m=+0.202772651 container create 9130f86c97bdd3bd93947a5c7544ce4cbb3f12df9aaa5ce71b189dfb6ce9cfcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:44:53 np0005596060 python3.9[102047]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:44:53 np0005596060 systemd[1]: Started libpod-conmon-9130f86c97bdd3bd93947a5c7544ce4cbb3f12df9aaa5ce71b189dfb6ce9cfcd.scope.
Jan 26 12:44:53 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:44:53 np0005596060 podman[102055]: 2026-01-26 17:44:53.318163063 +0000 UTC m=+0.312230381 container init 9130f86c97bdd3bd93947a5c7544ce4cbb3f12df9aaa5ce71b189dfb6ce9cfcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:44:53 np0005596060 podman[102055]: 2026-01-26 17:44:53.331461699 +0000 UTC m=+0.325528987 container start 9130f86c97bdd3bd93947a5c7544ce4cbb3f12df9aaa5ce71b189dfb6ce9cfcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 12:44:53 np0005596060 charming_brown[102076]: 167 167
Jan 26 12:44:53 np0005596060 systemd[1]: libpod-9130f86c97bdd3bd93947a5c7544ce4cbb3f12df9aaa5ce71b189dfb6ce9cfcd.scope: Deactivated successfully.
Jan 26 12:44:53 np0005596060 podman[102055]: 2026-01-26 17:44:53.354697207 +0000 UTC m=+0.348764505 container attach 9130f86c97bdd3bd93947a5c7544ce4cbb3f12df9aaa5ce71b189dfb6ce9cfcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:44:53 np0005596060 podman[102055]: 2026-01-26 17:44:53.355763624 +0000 UTC m=+0.349830902 container died 9130f86c97bdd3bd93947a5c7544ce4cbb3f12df9aaa5ce71b189dfb6ce9cfcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:44:53 np0005596060 systemd[1]: var-lib-containers-storage-overlay-cc6e111dab9ffac9b5da49a6c69eec5571a75d9bd9008c34405574aa5ce0be0b-merged.mount: Deactivated successfully.
Jan 26 12:44:53 np0005596060 podman[102055]: 2026-01-26 17:44:53.42078412 +0000 UTC m=+0.414851418 container remove 9130f86c97bdd3bd93947a5c7544ce4cbb3f12df9aaa5ce71b189dfb6ce9cfcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 26 12:44:53 np0005596060 systemd[1]: libpod-conmon-9130f86c97bdd3bd93947a5c7544ce4cbb3f12df9aaa5ce71b189dfb6ce9cfcd.scope: Deactivated successfully.
Jan 26 12:44:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 26 12:44:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 26 12:44:53 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 26 12:44:53 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 127 pg[9.1a( v 48'1155 (0'0,48'1155] local-lis/les=126/127 n=5 ec=55/41 lis/c=124/87 les/c/f=125/88/0 sis=126) [1] r=0 lpr=126 pi=[87,126)/1 crt=48'1155 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:44:53 np0005596060 podman[102128]: 2026-01-26 17:44:53.681712652 +0000 UTC m=+0.082425166 container create 1fcf036a9cd11a97b964fb4c541ce866c76915325991352b585892e32b50bc66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 12:44:53 np0005596060 podman[102128]: 2026-01-26 17:44:53.63063669 +0000 UTC m=+0.031349244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:44:53 np0005596060 systemd[1]: Started libpod-conmon-1fcf036a9cd11a97b964fb4c541ce866c76915325991352b585892e32b50bc66.scope.
Jan 26 12:44:53 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:44:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcc2323f17aac846a163010a1fae0e9538d7ec0c67a273c932769a818827bda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcc2323f17aac846a163010a1fae0e9538d7ec0c67a273c932769a818827bda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcc2323f17aac846a163010a1fae0e9538d7ec0c67a273c932769a818827bda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdcc2323f17aac846a163010a1fae0e9538d7ec0c67a273c932769a818827bda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:44:53 np0005596060 podman[102128]: 2026-01-26 17:44:53.8025513 +0000 UTC m=+0.203263874 container init 1fcf036a9cd11a97b964fb4c541ce866c76915325991352b585892e32b50bc66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:44:53 np0005596060 podman[102128]: 2026-01-26 17:44:53.81401065 +0000 UTC m=+0.214723144 container start 1fcf036a9cd11a97b964fb4c541ce866c76915325991352b585892e32b50bc66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:44:53 np0005596060 podman[102128]: 2026-01-26 17:44:53.818381371 +0000 UTC m=+0.219093865 container attach 1fcf036a9cd11a97b964fb4c541ce866c76915325991352b585892e32b50bc66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 12:44:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:44:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:53.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:44:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:54.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:54 np0005596060 python3.9[102326]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:44:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 26 12:44:54 np0005596060 musing_bouman[102144]: {
Jan 26 12:44:54 np0005596060 musing_bouman[102144]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:44:54 np0005596060 musing_bouman[102144]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:44:54 np0005596060 musing_bouman[102144]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:44:54 np0005596060 musing_bouman[102144]:        "osd_id": 1,
Jan 26 12:44:54 np0005596060 musing_bouman[102144]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:44:54 np0005596060 musing_bouman[102144]:        "type": "bluestore"
Jan 26 12:44:54 np0005596060 musing_bouman[102144]:    }
Jan 26 12:44:54 np0005596060 musing_bouman[102144]: }
Jan 26 12:44:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 26 12:44:54 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 26 12:44:54 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 128 pg[9.1b( v 48'1155 (0'0,48'1155] local-lis/les=0/0 n=5 ec=55/41 lis/c=126/67 les/c/f=127/68/0 sis=128) [1] r=0 lpr=128 pi=[67,128)/1 luod=0'0 crt=48'1155 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:44:54 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 128 pg[9.1b( v 48'1155 (0'0,48'1155] local-lis/les=0/0 n=5 ec=55/41 lis/c=126/67 les/c/f=127/68/0 sis=128) [1] r=0 lpr=128 pi=[67,128)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:44:54 np0005596060 systemd[1]: libpod-1fcf036a9cd11a97b964fb4c541ce866c76915325991352b585892e32b50bc66.scope: Deactivated successfully.
Jan 26 12:44:54 np0005596060 podman[102128]: 2026-01-26 17:44:54.78139926 +0000 UTC m=+1.182111764 container died 1fcf036a9cd11a97b964fb4c541ce866c76915325991352b585892e32b50bc66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 12:44:54 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fdcc2323f17aac846a163010a1fae0e9538d7ec0c67a273c932769a818827bda-merged.mount: Deactivated successfully.
Jan 26 12:44:54 np0005596060 podman[102128]: 2026-01-26 17:44:54.84619904 +0000 UTC m=+1.246911524 container remove 1fcf036a9cd11a97b964fb4c541ce866c76915325991352b585892e32b50bc66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 12:44:54 np0005596060 systemd[1]: libpod-conmon-1fcf036a9cd11a97b964fb4c541ce866c76915325991352b585892e32b50bc66.scope: Deactivated successfully.
Jan 26 12:44:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:44:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:44:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:44:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 46f85e77-b682-463c-b56a-4d2fe75553f9 does not exist
Jan 26 12:44:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 822216c4-a1ea-4816-b416-5aa8bdea6096 does not exist
Jan 26 12:44:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b98eb56f-fecd-4399-9a63-8dff4e84d07c does not exist
Jan 26 12:44:55 np0005596060 python3.9[102491]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:44:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:44:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:55.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 26 12:44:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:44:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 26 12:44:56 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 26 12:44:56 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 129 pg[9.1b( v 48'1155 (0'0,48'1155] local-lis/les=128/129 n=5 ec=55/41 lis/c=126/67 les/c/f=127/68/0 sis=128) [1] r=0 lpr=128 pi=[67,128)/1 crt=48'1155 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:44:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:56.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 1 unknown, 1 remapped+peering, 303 active+clean; 457 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:44:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:57.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:44:58.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 8.8 KiB/s rd, 170 B/s wr, 15 op/s; 54 B/s, 1 objects/s recovering
Jan 26 12:44:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Jan 26 12:44:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 26 12:44:59 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 26 12:44:59 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 26 12:44:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 26 12:44:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 26 12:44:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:44:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:44:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:44:59.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:44:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 26 12:44:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 26 12:44:59 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 26 12:45:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:00.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:45:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 26 12:45:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 167 B/s wr, 15 op/s; 54 B/s, 1 objects/s recovering
Jan 26 12:45:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Jan 26 12:45:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 26 12:45:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:01.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 26 12:45:02 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 26 12:45:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 26 12:45:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 26 12:45:02 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 26 12:45:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:02.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 7.9 KiB/s rd, 152 B/s wr, 14 op/s; 49 B/s, 1 objects/s recovering
Jan 26 12:45:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Jan 26 12:45:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 26 12:45:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:45:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:45:03 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 26 12:45:03 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 26 12:45:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 26 12:45:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 26 12:45:03 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 26 12:45:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:03.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:04.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 26 12:45:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 26 12:45:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:45:04 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 132 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=74/74 les/c/f=75/75/0 sis=132) [1] r=0 lpr=132 pi=[74,132)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:45:05 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 26 12:45:05 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 26 12:45:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 12:45:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 26 12:45:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:45:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 26 12:45:05 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 26 12:45:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 133 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=74/74 les/c/f=75/75/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[74,133)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:45:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 133 pg[9.1e( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=74/74 les/c/f=75/75/0 sis=133) [1]/[0] r=-1 lpr=133 pi=[74,133)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 26 12:45:05 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=96/96 les/c/f=97/97/0 sis=133) [1] r=0 lpr=133 pi=[96,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:45:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 26 12:45:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 26 12:45:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:05.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:06.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 26 12:45:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 26 12:45:06 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 26 12:45:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=96/96 les/c/f=97/97/0 sis=134) [1]/[0] r=-1 lpr=134 pi=[96,134)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:45:06 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=55/41 lis/c=96/96 les/c/f=97/97/0 sis=134) [1]/[0] r=-1 lpr=134 pi=[96,134)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 26 12:45:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 2 remapped+peering, 303 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 26 12:45:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 26 12:45:07 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 135 pg[9.1e( v 48'1155 (0'0,48'1155] local-lis/les=0/0 n=5 ec=55/41 lis/c=133/74 les/c/f=134/75/0 sis=135) [1] r=0 lpr=135 pi=[74,135)/1 luod=0'0 crt=48'1155 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:45:07 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 135 pg[9.1e( v 48'1155 (0'0,48'1155] local-lis/les=0/0 n=5 ec=55/41 lis/c=133/74 les/c/f=134/75/0 sis=135) [1] r=0 lpr=135 pi=[74,135)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:45:07 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 26 12:45:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:07.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:08 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 26 12:45:08 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 26 12:45:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:08.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 26 12:45:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 2 remapped+peering, 303 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 26 12:45:09 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 26 12:45:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 136 pg[9.1f( v 48'1155 (0'0,48'1155] local-lis/les=0/0 n=5 ec=55/41 lis/c=134/96 les/c/f=135/97/0 sis=136) [1] r=0 lpr=136 pi=[96,136)/1 luod=0'0 crt=48'1155 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 26 12:45:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 136 pg[9.1f( v 48'1155 (0'0,48'1155] local-lis/les=0/0 n=5 ec=55/41 lis/c=134/96 les/c/f=135/97/0 sis=136) [1] r=0 lpr=136 pi=[96,136)/1 crt=48'1155 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 26 12:45:09 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 136 pg[9.1e( v 48'1155 (0'0,48'1155] local-lis/les=135/136 n=5 ec=55/41 lis/c=133/74 les/c/f=134/75/0 sis=135) [1] r=0 lpr=135 pi=[74,135)/1 crt=48'1155 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:45:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:09.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 26 12:45:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 26 12:45:10 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 26 12:45:10 np0005596060 ceph-osd[84834]: osd.1 pg_epoch: 137 pg[9.1f( v 48'1155 (0'0,48'1155] local-lis/les=136/137 n=5 ec=55/41 lis/c=134/96 les/c/f=135/97/0 sis=136) [1] r=0 lpr=136 pi=[96,136)/1 crt=48'1155 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 26 12:45:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:10.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:45:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 2 remapped+peering, 303 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:11 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 26 12:45:11 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 26 12:45:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:11.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:12.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 26 12:45:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:13.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:45:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:45:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:45:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:45:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:45:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:45:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:14.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 47 B/s, 2 objects/s recovering
Jan 26 12:45:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:45:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:45:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:15.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:45:16 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 26 12:45:16 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 26 12:45:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:16.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 41 B/s, 1 objects/s recovering
Jan 26 12:45:17 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 26 12:45:17 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 26 12:45:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:45:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:17.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:45:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:18.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 33 B/s, 1 objects/s recovering
Jan 26 12:45:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:19.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:20 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 26 12:45:20 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 26 12:45:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:20.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:45:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 30 B/s, 1 objects/s recovering
Jan 26 12:45:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:21.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:22.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 1 objects/s recovering
Jan 26 12:45:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:23.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:24.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:45:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:25.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:26 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 26 12:45:26 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 26 12:45:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:26.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:27.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:28.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:29.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:30.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:45:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:31.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:32 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 26 12:45:32 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 26 12:45:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:32.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:45:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:33.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:45:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:34.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:35 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 26 12:45:35 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 26 12:45:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:45:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:35.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:36.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:37 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 26 12:45:37 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 26 12:45:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:37.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:38 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 26 12:45:38 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 26 12:45:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:38.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:45:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:39.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:45:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:40.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:45:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:41 np0005596060 python3.9[102912]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:45:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:41.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:42.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:43 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 26 12:45:43 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 26 12:45:43 np0005596060 python3.9[103200]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 26 12:45:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:45:43
Jan 26 12:45:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:45:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:45:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', 'backups', 'images', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 26 12:45:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:45:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:43.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:45:44 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.c deep-scrub starts
Jan 26 12:45:44 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.c deep-scrub ok
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:45:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:44.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:44 np0005596060 python3.9[103352]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 26 12:45:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:45 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 26 12:45:45 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 26 12:45:45 np0005596060 python3.9[103505]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:45:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:45:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:45.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:46.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:46 np0005596060 python3.9[103657]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 26 12:45:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:47.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:48 np0005596060 python3.9[103810]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:45:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:45:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:48.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:45:48 np0005596060 python3.9[103962]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:45:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:49 np0005596060 python3.9[104041]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:45:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:50.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:45:50 np0005596060 python3.9[104193]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:45:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:51 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 26 12:45:51 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 26 12:45:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:51.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:52 np0005596060 python3.9[104348]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 26 12:45:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:52.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:53 np0005596060 python3.9[104502]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 26 12:45:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:53.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:54 np0005596060 python3.9[104655]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 12:45:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:54.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:54 np0005596060 python3.9[104857]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 26 12:45:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:45:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:55.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:56 np0005596060 python3.9[105134]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:45:56 np0005596060 podman[105181]: 2026-01-26 17:45:56.512812811 +0000 UTC m=+0.396928597 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:45:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:45:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:56.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:45:56 np0005596060 podman[105181]: 2026-01-26 17:45:56.628633379 +0000 UTC m=+0.512749155 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 12:45:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:45:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:45:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:57 np0005596060 podman[105333]: 2026-01-26 17:45:57.377639966 +0000 UTC m=+0.063707491 container exec e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:45:57 np0005596060 podman[105333]: 2026-01-26 17:45:57.388681106 +0000 UTC m=+0.074748611 container exec_died e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:45:57 np0005596060 podman[105420]: 2026-01-26 17:45:57.603932517 +0000 UTC m=+0.059145116 container exec 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.28.2, description=keepalived for Ceph, distribution-scope=public, name=keepalived, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-type=git)
Jan 26 12:45:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:45:57 np0005596060 podman[105439]: 2026-01-26 17:45:57.722510915 +0000 UTC m=+0.094822259 container exec_died 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=keepalived-container, release=1793, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=Ceph keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived)
Jan 26 12:45:57 np0005596060 podman[105420]: 2026-01-26 17:45:57.828661849 +0000 UTC m=+0.283874428 container exec_died 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, description=keepalived for Ceph, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 26 12:45:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:45:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:45:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:45:58.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:58 np0005596060 python3.9[105579]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:45:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:45:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:45:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:45:58.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:45:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:45:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:45:59 np0005596060 python3.9[105844]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:59 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9ae5f0de-ace9-4937-b64e-73178f7aae2d does not exist
Jan 26 12:45:59 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ba0f3bba-6aeb-4ecf-97fd-3197ee1e8929 does not exist
Jan 26 12:45:59 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b6f108f9-e62f-44c4-8dd0-2a5caf3237df does not exist
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:45:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:46:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:00.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:00 np0005596060 python3.9[106014]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:46:00 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 26 12:46:00 np0005596060 podman[106106]: 2026-01-26 17:46:00.291922778 +0000 UTC m=+0.053035362 container create 294727bf0aed8750d8d0d4fc8b7b36a34ef7d4d94f2921ede4057a73c4293703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cohen, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Jan 26 12:46:00 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 26 12:46:00 np0005596060 systemd[1]: Started libpod-conmon-294727bf0aed8750d8d0d4fc8b7b36a34ef7d4d94f2921ede4057a73c4293703.scope.
Jan 26 12:46:00 np0005596060 podman[106106]: 2026-01-26 17:46:00.264497925 +0000 UTC m=+0.025610599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:46:00 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:46:00 np0005596060 podman[106106]: 2026-01-26 17:46:00.394087061 +0000 UTC m=+0.155199735 container init 294727bf0aed8750d8d0d4fc8b7b36a34ef7d4d94f2921ede4057a73c4293703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:46:00 np0005596060 podman[106106]: 2026-01-26 17:46:00.406151316 +0000 UTC m=+0.167263910 container start 294727bf0aed8750d8d0d4fc8b7b36a34ef7d4d94f2921ede4057a73c4293703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 12:46:00 np0005596060 podman[106106]: 2026-01-26 17:46:00.410810934 +0000 UTC m=+0.171923608 container attach 294727bf0aed8750d8d0d4fc8b7b36a34ef7d4d94f2921ede4057a73c4293703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cohen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:46:00 np0005596060 elastic_cohen[106172]: 167 167
Jan 26 12:46:00 np0005596060 systemd[1]: libpod-294727bf0aed8750d8d0d4fc8b7b36a34ef7d4d94f2921ede4057a73c4293703.scope: Deactivated successfully.
Jan 26 12:46:00 np0005596060 podman[106106]: 2026-01-26 17:46:00.414693382 +0000 UTC m=+0.175805996 container died 294727bf0aed8750d8d0d4fc8b7b36a34ef7d4d94f2921ede4057a73c4293703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:46:00 np0005596060 systemd[1]: var-lib-containers-storage-overlay-321a401807ce83a2640200c946c10f09f948a91aac6b64f01e51f72d67c72221-merged.mount: Deactivated successfully.
Jan 26 12:46:00 np0005596060 podman[106106]: 2026-01-26 17:46:00.470594786 +0000 UTC m=+0.231707380 container remove 294727bf0aed8750d8d0d4fc8b7b36a34ef7d4d94f2921ede4057a73c4293703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 12:46:00 np0005596060 systemd[1]: libpod-conmon-294727bf0aed8750d8d0d4fc8b7b36a34ef7d4d94f2921ede4057a73c4293703.scope: Deactivated successfully.
Jan 26 12:46:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:00.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:46:00 np0005596060 podman[106267]: 2026-01-26 17:46:00.617711795 +0000 UTC m=+0.037743965 container create dcfaa53cda0681d50f9fafb4e48c2c2ca049d81283ceb3e8682674a0a343457a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_yalow, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 12:46:00 np0005596060 systemd[1]: Started libpod-conmon-dcfaa53cda0681d50f9fafb4e48c2c2ca049d81283ceb3e8682674a0a343457a.scope.
Jan 26 12:46:00 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:46:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27459335f1c4cc67af9ff287a0b3930ee120dd77ec40c6d83ead58a7248ca74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27459335f1c4cc67af9ff287a0b3930ee120dd77ec40c6d83ead58a7248ca74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27459335f1c4cc67af9ff287a0b3930ee120dd77ec40c6d83ead58a7248ca74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27459335f1c4cc67af9ff287a0b3930ee120dd77ec40c6d83ead58a7248ca74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f27459335f1c4cc67af9ff287a0b3930ee120dd77ec40c6d83ead58a7248ca74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:00 np0005596060 podman[106267]: 2026-01-26 17:46:00.602208253 +0000 UTC m=+0.022240463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:46:00 np0005596060 podman[106267]: 2026-01-26 17:46:00.705902435 +0000 UTC m=+0.125934625 container init dcfaa53cda0681d50f9fafb4e48c2c2ca049d81283ceb3e8682674a0a343457a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_yalow, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:46:00 np0005596060 podman[106267]: 2026-01-26 17:46:00.715824376 +0000 UTC m=+0.135856546 container start dcfaa53cda0681d50f9fafb4e48c2c2ca049d81283ceb3e8682674a0a343457a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_yalow, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 12:46:00 np0005596060 podman[106267]: 2026-01-26 17:46:00.719835817 +0000 UTC m=+0.139868047 container attach dcfaa53cda0681d50f9fafb4e48c2c2ca049d81283ceb3e8682674a0a343457a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:46:00 np0005596060 python3.9[106287]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:46:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:01 np0005596060 python3.9[106374]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:46:01 np0005596060 gifted_yalow[106291]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:46:01 np0005596060 gifted_yalow[106291]: --> relative data size: 1.0
Jan 26 12:46:01 np0005596060 gifted_yalow[106291]: --> All data devices are unavailable
Jan 26 12:46:01 np0005596060 systemd[1]: libpod-dcfaa53cda0681d50f9fafb4e48c2c2ca049d81283ceb3e8682674a0a343457a.scope: Deactivated successfully.
Jan 26 12:46:01 np0005596060 podman[106267]: 2026-01-26 17:46:01.57907079 +0000 UTC m=+0.999102980 container died dcfaa53cda0681d50f9fafb4e48c2c2ca049d81283ceb3e8682674a0a343457a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:46:01 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f27459335f1c4cc67af9ff287a0b3930ee120dd77ec40c6d83ead58a7248ca74-merged.mount: Deactivated successfully.
Jan 26 12:46:01 np0005596060 podman[106267]: 2026-01-26 17:46:01.643828778 +0000 UTC m=+1.063860978 container remove dcfaa53cda0681d50f9fafb4e48c2c2ca049d81283ceb3e8682674a0a343457a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_yalow, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:46:01 np0005596060 systemd[1]: libpod-conmon-dcfaa53cda0681d50f9fafb4e48c2c2ca049d81283ceb3e8682674a0a343457a.scope: Deactivated successfully.
Jan 26 12:46:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:02.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:02 np0005596060 podman[106637]: 2026-01-26 17:46:02.335372892 +0000 UTC m=+0.042464524 container create b8780f9b79df456b7ae0a35251a2c7f7e23fb65880ecbe9fd6a4d8164ed26e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackburn, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:46:02 np0005596060 systemd[1]: Started libpod-conmon-b8780f9b79df456b7ae0a35251a2c7f7e23fb65880ecbe9fd6a4d8164ed26e8d.scope.
Jan 26 12:46:02 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:46:02 np0005596060 podman[106637]: 2026-01-26 17:46:02.31749001 +0000 UTC m=+0.024581672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:46:02 np0005596060 podman[106637]: 2026-01-26 17:46:02.430290232 +0000 UTC m=+0.137381884 container init b8780f9b79df456b7ae0a35251a2c7f7e23fb65880ecbe9fd6a4d8164ed26e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackburn, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 12:46:02 np0005596060 podman[106637]: 2026-01-26 17:46:02.44011077 +0000 UTC m=+0.147202402 container start b8780f9b79df456b7ae0a35251a2c7f7e23fb65880ecbe9fd6a4d8164ed26e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:46:02 np0005596060 podman[106637]: 2026-01-26 17:46:02.444596844 +0000 UTC m=+0.151688476 container attach b8780f9b79df456b7ae0a35251a2c7f7e23fb65880ecbe9fd6a4d8164ed26e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackburn, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 26 12:46:02 np0005596060 great_blackburn[106686]: 167 167
Jan 26 12:46:02 np0005596060 systemd[1]: libpod-b8780f9b79df456b7ae0a35251a2c7f7e23fb65880ecbe9fd6a4d8164ed26e8d.scope: Deactivated successfully.
Jan 26 12:46:02 np0005596060 podman[106637]: 2026-01-26 17:46:02.448478272 +0000 UTC m=+0.155569904 container died b8780f9b79df456b7ae0a35251a2c7f7e23fb65880ecbe9fd6a4d8164ed26e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 12:46:02 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fe5f6fb22b43401ece8e4c84457e47b6e50f28dffa24589cd06fe4d797352bd6-merged.mount: Deactivated successfully.
Jan 26 12:46:02 np0005596060 podman[106637]: 2026-01-26 17:46:02.48797102 +0000 UTC m=+0.195062652 container remove b8780f9b79df456b7ae0a35251a2c7f7e23fb65880ecbe9fd6a4d8164ed26e8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 12:46:02 np0005596060 systemd[1]: libpod-conmon-b8780f9b79df456b7ae0a35251a2c7f7e23fb65880ecbe9fd6a4d8164ed26e8d.scope: Deactivated successfully.
Jan 26 12:46:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:02.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:02 np0005596060 python3.9[106709]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:46:02 np0005596060 podman[106730]: 2026-01-26 17:46:02.66950321 +0000 UTC m=+0.045496591 container create b0588ca49e64bf20c3ce9fa13e16070353fbf4471f69f116ff1af2ad9a467760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:46:02 np0005596060 systemd[1]: Started libpod-conmon-b0588ca49e64bf20c3ce9fa13e16070353fbf4471f69f116ff1af2ad9a467760.scope.
Jan 26 12:46:02 np0005596060 podman[106730]: 2026-01-26 17:46:02.64970861 +0000 UTC m=+0.025701991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:46:02 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:46:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c816f8cc568e22210d8558011ca12f40d89b9dffddbcfcf04aa94700567bd39a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c816f8cc568e22210d8558011ca12f40d89b9dffddbcfcf04aa94700567bd39a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c816f8cc568e22210d8558011ca12f40d89b9dffddbcfcf04aa94700567bd39a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c816f8cc568e22210d8558011ca12f40d89b9dffddbcfcf04aa94700567bd39a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:02 np0005596060 podman[106730]: 2026-01-26 17:46:02.853976504 +0000 UTC m=+0.229969885 container init b0588ca49e64bf20c3ce9fa13e16070353fbf4471f69f116ff1af2ad9a467760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 26 12:46:02 np0005596060 podman[106730]: 2026-01-26 17:46:02.863122755 +0000 UTC m=+0.239116116 container start b0588ca49e64bf20c3ce9fa13e16070353fbf4471f69f116ff1af2ad9a467760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:46:02 np0005596060 podman[106730]: 2026-01-26 17:46:02.867126297 +0000 UTC m=+0.243119678 container attach b0588ca49e64bf20c3ce9fa13e16070353fbf4471f69f116ff1af2ad9a467760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 12:46:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:03 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 26 12:46:03 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:46:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]: {
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:    "1": [
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:        {
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            "devices": [
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "/dev/loop3"
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            ],
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            "lv_name": "ceph_lv0",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            "lv_size": "7511998464",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            "name": "ceph_lv0",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            "tags": {
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "ceph.cluster_name": "ceph",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "ceph.crush_device_class": "",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "ceph.encrypted": "0",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "ceph.osd_id": "1",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "ceph.type": "block",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:                "ceph.vdo": "0"
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            },
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            "type": "block",
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:            "vg_name": "ceph_vg0"
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:        }
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]:    ]
Jan 26 12:46:03 np0005596060 tender_dubinsky[106748]: }
Jan 26 12:46:03 np0005596060 systemd[1]: libpod-b0588ca49e64bf20c3ce9fa13e16070353fbf4471f69f116ff1af2ad9a467760.scope: Deactivated successfully.
Jan 26 12:46:03 np0005596060 podman[106730]: 2026-01-26 17:46:03.685317073 +0000 UTC m=+1.061310444 container died b0588ca49e64bf20c3ce9fa13e16070353fbf4471f69f116ff1af2ad9a467760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Jan 26 12:46:03 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c816f8cc568e22210d8558011ca12f40d89b9dffddbcfcf04aa94700567bd39a-merged.mount: Deactivated successfully.
Jan 26 12:46:03 np0005596060 podman[106730]: 2026-01-26 17:46:03.772698402 +0000 UTC m=+1.148691763 container remove b0588ca49e64bf20c3ce9fa13e16070353fbf4471f69f116ff1af2ad9a467760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_dubinsky, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:46:03 np0005596060 systemd[1]: libpod-conmon-b0588ca49e64bf20c3ce9fa13e16070353fbf4471f69f116ff1af2ad9a467760.scope: Deactivated successfully.
Jan 26 12:46:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:46:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:04.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:46:04 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 26 12:46:04 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 26 12:46:04 np0005596060 podman[106932]: 2026-01-26 17:46:04.462927164 +0000 UTC m=+0.045131082 container create 9268e3f5db6fb91aaf795eb1cf23a984402beb1a4ec92bd113f356fba566de72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 12:46:04 np0005596060 systemd[1]: Started libpod-conmon-9268e3f5db6fb91aaf795eb1cf23a984402beb1a4ec92bd113f356fba566de72.scope.
Jan 26 12:46:04 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:46:04 np0005596060 podman[106932]: 2026-01-26 17:46:04.444406575 +0000 UTC m=+0.026610513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:46:04 np0005596060 podman[106932]: 2026-01-26 17:46:04.54742643 +0000 UTC m=+0.129630348 container init 9268e3f5db6fb91aaf795eb1cf23a984402beb1a4ec92bd113f356fba566de72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_matsumoto, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:46:04 np0005596060 podman[106932]: 2026-01-26 17:46:04.556246843 +0000 UTC m=+0.138450761 container start 9268e3f5db6fb91aaf795eb1cf23a984402beb1a4ec92bd113f356fba566de72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:46:04 np0005596060 podman[106932]: 2026-01-26 17:46:04.559755052 +0000 UTC m=+0.141958970 container attach 9268e3f5db6fb91aaf795eb1cf23a984402beb1a4ec92bd113f356fba566de72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_matsumoto, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:46:04 np0005596060 beautiful_matsumoto[106949]: 167 167
Jan 26 12:46:04 np0005596060 systemd[1]: libpod-9268e3f5db6fb91aaf795eb1cf23a984402beb1a4ec92bd113f356fba566de72.scope: Deactivated successfully.
Jan 26 12:46:04 np0005596060 podman[106932]: 2026-01-26 17:46:04.564565563 +0000 UTC m=+0.146769481 container died 9268e3f5db6fb91aaf795eb1cf23a984402beb1a4ec92bd113f356fba566de72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_matsumoto, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 12:46:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:04.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-126da895e14c861b6e1f68e30a5cbc90755c62a69bd48e57fa10380ce91fc34f-merged.mount: Deactivated successfully.
Jan 26 12:46:04 np0005596060 podman[106932]: 2026-01-26 17:46:04.6059553 +0000 UTC m=+0.188159228 container remove 9268e3f5db6fb91aaf795eb1cf23a984402beb1a4ec92bd113f356fba566de72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:46:04 np0005596060 systemd[1]: libpod-conmon-9268e3f5db6fb91aaf795eb1cf23a984402beb1a4ec92bd113f356fba566de72.scope: Deactivated successfully.
Jan 26 12:46:04 np0005596060 podman[107002]: 2026-01-26 17:46:04.839340429 +0000 UTC m=+0.078677630 container create fc73af6d75c825883442a2d8f24d6480a849c48c8293136f263f314d454616b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lovelace, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 12:46:04 np0005596060 systemd[1]: Started libpod-conmon-fc73af6d75c825883442a2d8f24d6480a849c48c8293136f263f314d454616b1.scope.
Jan 26 12:46:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:04 np0005596060 podman[107002]: 2026-01-26 17:46:04.806422317 +0000 UTC m=+0.045759568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:46:04 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:46:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15136b855edb90cfc0e9b4e5b5eb9df2460b4f71f9ea3f07d61d3a2b0716cf8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15136b855edb90cfc0e9b4e5b5eb9df2460b4f71f9ea3f07d61d3a2b0716cf8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15136b855edb90cfc0e9b4e5b5eb9df2460b4f71f9ea3f07d61d3a2b0716cf8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15136b855edb90cfc0e9b4e5b5eb9df2460b4f71f9ea3f07d61d3a2b0716cf8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:46:04 np0005596060 podman[107002]: 2026-01-26 17:46:04.93191421 +0000 UTC m=+0.171251391 container init fc73af6d75c825883442a2d8f24d6480a849c48c8293136f263f314d454616b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:46:04 np0005596060 podman[107002]: 2026-01-26 17:46:04.93982191 +0000 UTC m=+0.179159081 container start fc73af6d75c825883442a2d8f24d6480a849c48c8293136f263f314d454616b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lovelace, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 12:46:04 np0005596060 podman[107002]: 2026-01-26 17:46:04.943433851 +0000 UTC m=+0.182771062 container attach fc73af6d75c825883442a2d8f24d6480a849c48c8293136f263f314d454616b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lovelace, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:46:05 np0005596060 python3.9[107120]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:46:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:46:05 np0005596060 gracious_lovelace[107068]: {
Jan 26 12:46:05 np0005596060 gracious_lovelace[107068]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:46:05 np0005596060 gracious_lovelace[107068]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:46:05 np0005596060 gracious_lovelace[107068]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:46:05 np0005596060 gracious_lovelace[107068]:        "osd_id": 1,
Jan 26 12:46:05 np0005596060 gracious_lovelace[107068]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:46:05 np0005596060 gracious_lovelace[107068]:        "type": "bluestore"
Jan 26 12:46:05 np0005596060 gracious_lovelace[107068]:    }
Jan 26 12:46:05 np0005596060 gracious_lovelace[107068]: }
Jan 26 12:46:05 np0005596060 systemd[1]: libpod-fc73af6d75c825883442a2d8f24d6480a849c48c8293136f263f314d454616b1.scope: Deactivated successfully.
Jan 26 12:46:05 np0005596060 podman[107002]: 2026-01-26 17:46:05.901025492 +0000 UTC m=+1.140362673 container died fc73af6d75c825883442a2d8f24d6480a849c48c8293136f263f314d454616b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 12:46:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:06.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:06 np0005596060 python3.9[107297]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 26 12:46:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:46:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:06.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:46:06 np0005596060 python3.9[107447]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:46:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:07 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 26 12:46:07 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 26 12:46:07 np0005596060 systemd[1]: var-lib-containers-storage-overlay-15136b855edb90cfc0e9b4e5b5eb9df2460b4f71f9ea3f07d61d3a2b0716cf8b-merged.mount: Deactivated successfully.
Jan 26 12:46:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:08.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:08 np0005596060 podman[107002]: 2026-01-26 17:46:08.154014365 +0000 UTC m=+3.393351546 container remove fc73af6d75c825883442a2d8f24d6480a849c48c8293136f263f314d454616b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:46:08 np0005596060 systemd[1]: libpod-conmon-fc73af6d75c825883442a2d8f24d6480a849c48c8293136f263f314d454616b1.scope: Deactivated successfully.
Jan 26 12:46:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:46:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:46:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:46:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:46:08 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev e4674a86-e1c7-4377-bf25-8c05802766ce does not exist
Jan 26 12:46:08 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 421371c5-8134-489c-a902-c56c2e50f556 does not exist
Jan 26 12:46:08 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b24a4a91-2d54-4b00-88c2-4b7e74a0c5c7 does not exist
Jan 26 12:46:08 np0005596060 python3.9[107601]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:46:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:08.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:08 np0005596060 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 26 12:46:08 np0005596060 systemd[1]: tuned.service: Deactivated successfully.
Jan 26 12:46:08 np0005596060 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 26 12:46:08 np0005596060 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 26 12:46:08 np0005596060 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 26 12:46:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:46:09 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:46:09 np0005596060 python3.9[107814]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 26 12:46:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:10.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:46:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:46:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:10.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:46:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:11 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 26 12:46:11 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 26 12:46:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:12.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:12.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:13 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 26 12:46:13 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 26 12:46:13 np0005596060 python3.9[107968]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:46:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:14.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:46:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:46:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:46:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:46:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:46:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:46:14 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 26 12:46:14 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 26 12:46:14 np0005596060 python3.9[108122]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:46:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:14.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:15 np0005596060 systemd[1]: session-35.scope: Deactivated successfully.
Jan 26 12:46:15 np0005596060 systemd[1]: session-35.scope: Consumed 1min 8.980s CPU time.
Jan 26 12:46:15 np0005596060 systemd-logind[786]: Session 35 logged out. Waiting for processes to exit.
Jan 26 12:46:15 np0005596060 systemd-logind[786]: Removed session 35.
Jan 26 12:46:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:46:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:16.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:16.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:18.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:18 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 26 12:46:18 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 26 12:46:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:18.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:20.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:20 np0005596060 systemd-logind[786]: New session 36 of user zuul.
Jan 26 12:46:20 np0005596060 systemd[1]: Started Session 36 of User zuul.
Jan 26 12:46:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:46:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:20.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:21 np0005596060 python3.9[108356]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:46:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:46:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:22.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:46:22 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 26 12:46:22 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 26 12:46:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:22.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:22 np0005596060 python3.9[108512]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 26 12:46:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:23 np0005596060 python3.9[108666]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:46:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:24.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:24.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:24 np0005596060 python3.9[108750]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 12:46:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:46:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:26.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:26 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 26 12:46:26 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 26 12:46:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:26.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:27 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 26 12:46:27 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 26 12:46:27 np0005596060 python3.9[108905]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:46:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:28.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:28 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 26 12:46:28 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 26 12:46:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:28.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:30.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:30 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 26 12:46:30 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 26 12:46:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:30.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:46:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:31 np0005596060 python3.9[109059]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 12:46:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:32.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:32 np0005596060 python3.9[109213]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:46:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:32.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:33 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 26 12:46:33 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 26 12:46:33 np0005596060 python3.9[109366]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 26 12:46:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:46:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:34.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:46:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:34.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:34 np0005596060 python3.9[109516]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:46:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:46:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:36.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:36 np0005596060 python3.9[109725]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:46:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:46:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:36.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:46:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:37 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 26 12:46:37 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 26 12:46:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:38.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:38.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:38 np0005596060 python3.9[109879]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:46:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:46:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:40.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:46:40 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 26 12:46:40 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 26 12:46:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:46:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:40.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:46:40 np0005596060 python3.9[110168]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 26 12:46:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:46:42 np0005596060 python3.9[110319]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:46:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:42.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:42 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.1b deep-scrub starts
Jan 26 12:46:42 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.1b deep-scrub ok
Jan 26 12:46:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:42.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:42 np0005596060 python3.9[110473]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:46:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:43 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 26 12:46:43 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 26 12:46:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:46:43
Jan 26 12:46:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:46:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:46:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'default.rgw.control', 'vms', 'volumes', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 26 12:46:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:46:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:44.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:46:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:44.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:45 np0005596060 python3.9[110628]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:46:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:46:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:46.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:46.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:47 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 26 12:46:47 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 26 12:46:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:48.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:48 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 26 12:46:48 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 26 12:46:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:48.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:48 np0005596060 python3.9[110782]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:46:49 np0005596060 python3.9[110937]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 26 12:46:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:50.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:50 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 26 12:46:50 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 26 12:46:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:50.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:46:51 np0005596060 systemd[1]: session-36.scope: Deactivated successfully.
Jan 26 12:46:51 np0005596060 systemd-logind[786]: Session 36 logged out. Waiting for processes to exit.
Jan 26 12:46:51 np0005596060 systemd[1]: session-36.scope: Consumed 19.964s CPU time.
Jan 26 12:46:51 np0005596060 systemd-logind[786]: Removed session 36.
Jan 26 12:46:51 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 26 12:46:51 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 26 12:46:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:52.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:52.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:54.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:54 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Jan 26 12:46:54 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Jan 26 12:46:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:54.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:46:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:46:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:56.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:46:56 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 26 12:46:56 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 26 12:46:56 np0005596060 systemd-logind[786]: New session 37 of user zuul.
Jan 26 12:46:56 np0005596060 systemd[1]: Started Session 37 of User zuul.
Jan 26 12:46:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:56.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:46:57 np0005596060 python3.9[111169]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:46:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:46:58.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:58 np0005596060 python3.9[111323]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:46:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:46:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:46:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:46:58.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:46:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:00 np0005596060 python3.9[111517]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:47:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:00.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:00 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.f deep-scrub starts
Jan 26 12:47:00 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 6.f deep-scrub ok
Jan 26 12:47:00 np0005596060 systemd[1]: session-37.scope: Deactivated successfully.
Jan 26 12:47:00 np0005596060 systemd[1]: session-37.scope: Consumed 2.562s CPU time.
Jan 26 12:47:00 np0005596060 systemd-logind[786]: Session 37 logged out. Waiting for processes to exit.
Jan 26 12:47:00 np0005596060 systemd-logind[786]: Removed session 37.
Jan 26 12:47:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:00.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:01 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.19 deep-scrub starts
Jan 26 12:47:01 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.19 deep-scrub ok
Jan 26 12:47:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:47:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:02.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:47:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:47:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:04.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:04.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:06.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:47:06 np0005596060 systemd-logind[786]: New session 38 of user zuul.
Jan 26 12:47:06 np0005596060 systemd[1]: Started Session 38 of User zuul.
Jan 26 12:47:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:06.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:07 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.1a deep-scrub starts
Jan 26 12:47:07 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.1a deep-scrub ok
Jan 26 12:47:07 np0005596060 python3.9[111700]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:47:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:08.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:08 np0005596060 python3.9[111854]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:47:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:08.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:09 np0005596060 python3.9[112142]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:47:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:10.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:47:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:47:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:47:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:47:10 np0005596060 python3.9[112226]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:47:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:10.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:47:11 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev e484cf77-d4c0-4505-a56d-5cb5e6373adf does not exist
Jan 26 12:47:11 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev fd51c286-b290-475e-bc50-6b16005d6144 does not exist
Jan 26 12:47:11 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 38db691f-2ba9-4b40-96ed-e51624241744 does not exist
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:47:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:47:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:12.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:12 np0005596060 podman[112368]: 2026-01-26 17:47:12.049613349 +0000 UTC m=+0.024278802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:47:12 np0005596060 podman[112368]: 2026-01-26 17:47:12.181767474 +0000 UTC m=+0.156432907 container create abb3e85c8c3ac4f9f3c288b129112d89f518cc32c3ff9b73c9d108d80c3c5fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:47:12 np0005596060 systemd[1]: Started libpod-conmon-abb3e85c8c3ac4f9f3c288b129112d89f518cc32c3ff9b73c9d108d80c3c5fb4.scope.
Jan 26 12:47:12 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:47:12 np0005596060 podman[112368]: 2026-01-26 17:47:12.288029007 +0000 UTC m=+0.262694470 container init abb3e85c8c3ac4f9f3c288b129112d89f518cc32c3ff9b73c9d108d80c3c5fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:47:12 np0005596060 podman[112368]: 2026-01-26 17:47:12.302640455 +0000 UTC m=+0.277305888 container start abb3e85c8c3ac4f9f3c288b129112d89f518cc32c3ff9b73c9d108d80c3c5fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:47:12 np0005596060 podman[112368]: 2026-01-26 17:47:12.307361514 +0000 UTC m=+0.282026947 container attach abb3e85c8c3ac4f9f3c288b129112d89f518cc32c3ff9b73c9d108d80c3c5fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_williamson, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:47:12 np0005596060 clever_williamson[112408]: 167 167
Jan 26 12:47:12 np0005596060 systemd[1]: libpod-abb3e85c8c3ac4f9f3c288b129112d89f518cc32c3ff9b73c9d108d80c3c5fb4.scope: Deactivated successfully.
Jan 26 12:47:12 np0005596060 conmon[112408]: conmon abb3e85c8c3ac4f9f3c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abb3e85c8c3ac4f9f3c288b129112d89f518cc32c3ff9b73c9d108d80c3c5fb4.scope/container/memory.events
Jan 26 12:47:12 np0005596060 podman[112368]: 2026-01-26 17:47:12.312095473 +0000 UTC m=+0.286760896 container died abb3e85c8c3ac4f9f3c288b129112d89f518cc32c3ff9b73c9d108d80c3c5fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 26 12:47:12 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8e5fff8500169ba5e7a20bf58485ab77ec8de828de7b70cd4794643373851b79-merged.mount: Deactivated successfully.
Jan 26 12:47:12 np0005596060 podman[112368]: 2026-01-26 17:47:12.362933112 +0000 UTC m=+0.337598545 container remove abb3e85c8c3ac4f9f3c288b129112d89f518cc32c3ff9b73c9d108d80c3c5fb4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 12:47:12 np0005596060 systemd[1]: libpod-conmon-abb3e85c8c3ac4f9f3c288b129112d89f518cc32c3ff9b73c9d108d80c3c5fb4.scope: Deactivated successfully.
Jan 26 12:47:12 np0005596060 podman[112491]: 2026-01-26 17:47:12.665302599 +0000 UTC m=+0.080191829 container create 7c16225a6647a6b28bd5e286ce168d3adcf14efc4892687225c2264da0189530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 12:47:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:12.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:12 np0005596060 podman[112491]: 2026-01-26 17:47:12.628033221 +0000 UTC m=+0.042922511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:47:12 np0005596060 systemd[1]: Started libpod-conmon-7c16225a6647a6b28bd5e286ce168d3adcf14efc4892687225c2264da0189530.scope.
Jan 26 12:47:12 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:47:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb30ff7d5170090039d671be5c87aeaa35cf3801ada0af44ac4fed3e2fe18cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb30ff7d5170090039d671be5c87aeaa35cf3801ada0af44ac4fed3e2fe18cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb30ff7d5170090039d671be5c87aeaa35cf3801ada0af44ac4fed3e2fe18cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb30ff7d5170090039d671be5c87aeaa35cf3801ada0af44ac4fed3e2fe18cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb30ff7d5170090039d671be5c87aeaa35cf3801ada0af44ac4fed3e2fe18cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:12 np0005596060 podman[112491]: 2026-01-26 17:47:12.905020029 +0000 UTC m=+0.319909259 container init 7c16225a6647a6b28bd5e286ce168d3adcf14efc4892687225c2264da0189530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_morse, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 26 12:47:12 np0005596060 podman[112491]: 2026-01-26 17:47:12.912861996 +0000 UTC m=+0.327751206 container start 7c16225a6647a6b28bd5e286ce168d3adcf14efc4892687225c2264da0189530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 12:47:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:12 np0005596060 podman[112491]: 2026-01-26 17:47:12.984428736 +0000 UTC m=+0.399317936 container attach 7c16225a6647a6b28bd5e286ce168d3adcf14efc4892687225c2264da0189530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 12:47:13 np0005596060 python3.9[112578]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:47:13 np0005596060 great_morse[112563]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:47:13 np0005596060 great_morse[112563]: --> relative data size: 1.0
Jan 26 12:47:13 np0005596060 great_morse[112563]: --> All data devices are unavailable
Jan 26 12:47:13 np0005596060 systemd[1]: libpod-7c16225a6647a6b28bd5e286ce168d3adcf14efc4892687225c2264da0189530.scope: Deactivated successfully.
Jan 26 12:47:13 np0005596060 conmon[112563]: conmon 7c16225a6647a6b28bd5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c16225a6647a6b28bd5e286ce168d3adcf14efc4892687225c2264da0189530.scope/container/memory.events
Jan 26 12:47:13 np0005596060 podman[112491]: 2026-01-26 17:47:13.806848697 +0000 UTC m=+1.221737947 container died 7c16225a6647a6b28bd5e286ce168d3adcf14efc4892687225c2264da0189530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:47:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:47:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:47:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:47:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:47:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:47:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:47:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:14.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:14.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:14 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2cb30ff7d5170090039d671be5c87aeaa35cf3801ada0af44ac4fed3e2fe18cf-merged.mount: Deactivated successfully.
Jan 26 12:47:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:15 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.1b deep-scrub starts
Jan 26 12:47:15 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.1b deep-scrub ok
Jan 26 12:47:15 np0005596060 podman[112491]: 2026-01-26 17:47:15.542381561 +0000 UTC m=+2.957270791 container remove 7c16225a6647a6b28bd5e286ce168d3adcf14efc4892687225c2264da0189530 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_morse, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:47:15 np0005596060 systemd[1]: libpod-conmon-7c16225a6647a6b28bd5e286ce168d3adcf14efc4892687225c2264da0189530.scope: Deactivated successfully.
Jan 26 12:47:15 np0005596060 python3.9[112850]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:47:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:16.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:16 np0005596060 podman[113066]: 2026-01-26 17:47:16.191268546 +0000 UTC m=+0.038626073 container create d1a209134e2c2f18f5ee8281512c40c6e4d87ab9c8f9995ced53fced44e4eb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 12:47:16 np0005596060 systemd[1]: Started libpod-conmon-d1a209134e2c2f18f5ee8281512c40c6e4d87ab9c8f9995ced53fced44e4eb3c.scope.
Jan 26 12:47:16 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:47:16 np0005596060 podman[113066]: 2026-01-26 17:47:16.173222462 +0000 UTC m=+0.020580019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:47:16 np0005596060 podman[113066]: 2026-01-26 17:47:16.31149302 +0000 UTC m=+0.158850577 container init d1a209134e2c2f18f5ee8281512c40c6e4d87ab9c8f9995ced53fced44e4eb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 26 12:47:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:47:16 np0005596060 podman[113066]: 2026-01-26 17:47:16.320769164 +0000 UTC m=+0.168126691 container start d1a209134e2c2f18f5ee8281512c40c6e4d87ab9c8f9995ced53fced44e4eb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ride, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:47:16 np0005596060 podman[113066]: 2026-01-26 17:47:16.325258446 +0000 UTC m=+0.172615973 container attach d1a209134e2c2f18f5ee8281512c40c6e4d87ab9c8f9995ced53fced44e4eb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ride, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 12:47:16 np0005596060 musing_ride[113111]: 167 167
Jan 26 12:47:16 np0005596060 systemd[1]: libpod-d1a209134e2c2f18f5ee8281512c40c6e4d87ab9c8f9995ced53fced44e4eb3c.scope: Deactivated successfully.
Jan 26 12:47:16 np0005596060 podman[113066]: 2026-01-26 17:47:16.328689563 +0000 UTC m=+0.176047090 container died d1a209134e2c2f18f5ee8281512c40c6e4d87ab9c8f9995ced53fced44e4eb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 12:47:16 np0005596060 systemd[1]: var-lib-containers-storage-overlay-348e65519788641d719e68be296cc05ad26c7ab1aa684cc45d3db70afde645b6-merged.mount: Deactivated successfully.
Jan 26 12:47:16 np0005596060 podman[113066]: 2026-01-26 17:47:16.374135425 +0000 UTC m=+0.221492952 container remove d1a209134e2c2f18f5ee8281512c40c6e4d87ab9c8f9995ced53fced44e4eb3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 12:47:16 np0005596060 systemd[1]: libpod-conmon-d1a209134e2c2f18f5ee8281512c40c6e4d87ab9c8f9995ced53fced44e4eb3c.scope: Deactivated successfully.
Jan 26 12:47:16 np0005596060 python3.9[113164]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:47:16 np0005596060 podman[113183]: 2026-01-26 17:47:16.591055202 +0000 UTC m=+0.085746968 container create 89a6ea4817b7af67ed8a971c8bfc31138a61c42edc6cd349291d4e1982a1ce94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 26 12:47:16 np0005596060 podman[113183]: 2026-01-26 17:47:16.533236968 +0000 UTC m=+0.027928764 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:47:16 np0005596060 systemd[1]: Started libpod-conmon-89a6ea4817b7af67ed8a971c8bfc31138a61c42edc6cd349291d4e1982a1ce94.scope.
Jan 26 12:47:16 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:47:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18591bbeb7f0b878fd450c15d0108c7d58ae4b434ff6a3599685c71c694c181/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18591bbeb7f0b878fd450c15d0108c7d58ae4b434ff6a3599685c71c694c181/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18591bbeb7f0b878fd450c15d0108c7d58ae4b434ff6a3599685c71c694c181/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f18591bbeb7f0b878fd450c15d0108c7d58ae4b434ff6a3599685c71c694c181/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:16 np0005596060 podman[113183]: 2026-01-26 17:47:16.709099322 +0000 UTC m=+0.203791108 container init 89a6ea4817b7af67ed8a971c8bfc31138a61c42edc6cd349291d4e1982a1ce94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:47:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:16.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:16 np0005596060 podman[113183]: 2026-01-26 17:47:16.71733775 +0000 UTC m=+0.212029516 container start 89a6ea4817b7af67ed8a971c8bfc31138a61c42edc6cd349291d4e1982a1ce94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 12:47:16 np0005596060 podman[113183]: 2026-01-26 17:47:16.72092816 +0000 UTC m=+0.215619926 container attach 89a6ea4817b7af67ed8a971c8bfc31138a61c42edc6cd349291d4e1982a1ce94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:47:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:17 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 26 12:47:17 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]: {
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:    "1": [
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:        {
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            "devices": [
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "/dev/loop3"
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            ],
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            "lv_name": "ceph_lv0",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            "lv_size": "7511998464",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            "name": "ceph_lv0",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            "tags": {
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "ceph.cluster_name": "ceph",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "ceph.crush_device_class": "",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "ceph.encrypted": "0",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "ceph.osd_id": "1",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "ceph.type": "block",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:                "ceph.vdo": "0"
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            },
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            "type": "block",
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:            "vg_name": "ceph_vg0"
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:        }
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]:    ]
Jan 26 12:47:17 np0005596060 lucid_hawking[113211]: }
Jan 26 12:47:17 np0005596060 systemd[1]: libpod-89a6ea4817b7af67ed8a971c8bfc31138a61c42edc6cd349291d4e1982a1ce94.scope: Deactivated successfully.
Jan 26 12:47:17 np0005596060 podman[113183]: 2026-01-26 17:47:17.570406021 +0000 UTC m=+1.065097797 container died 89a6ea4817b7af67ed8a971c8bfc31138a61c42edc6cd349291d4e1982a1ce94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 12:47:17 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f18591bbeb7f0b878fd450c15d0108c7d58ae4b434ff6a3599685c71c694c181-merged.mount: Deactivated successfully.
Jan 26 12:47:17 np0005596060 podman[113183]: 2026-01-26 17:47:17.666470888 +0000 UTC m=+1.161162694 container remove 89a6ea4817b7af67ed8a971c8bfc31138a61c42edc6cd349291d4e1982a1ce94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:47:17 np0005596060 python3.9[113370]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:47:17 np0005596060 systemd[1]: libpod-conmon-89a6ea4817b7af67ed8a971c8bfc31138a61c42edc6cd349291d4e1982a1ce94.scope: Deactivated successfully.
Jan 26 12:47:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:18 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 26 12:47:18 np0005596060 ceph-osd[84834]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 26 12:47:18 np0005596060 python3.9[113540]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:47:18 np0005596060 podman[113628]: 2026-01-26 17:47:18.361667518 +0000 UTC m=+0.047919366 container create 82603d8fe299aeca040aafdaaff0e56f546a299e0b1142c2765613933cea8140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:47:18 np0005596060 systemd[1]: Started libpod-conmon-82603d8fe299aeca040aafdaaff0e56f546a299e0b1142c2765613933cea8140.scope.
Jan 26 12:47:18 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:47:18 np0005596060 podman[113628]: 2026-01-26 17:47:18.341403138 +0000 UTC m=+0.027654996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:47:18 np0005596060 podman[113628]: 2026-01-26 17:47:18.45196368 +0000 UTC m=+0.138215548 container init 82603d8fe299aeca040aafdaaff0e56f546a299e0b1142c2765613933cea8140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 12:47:18 np0005596060 podman[113628]: 2026-01-26 17:47:18.460245038 +0000 UTC m=+0.146496876 container start 82603d8fe299aeca040aafdaaff0e56f546a299e0b1142c2765613933cea8140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mayer, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:47:18 np0005596060 podman[113628]: 2026-01-26 17:47:18.463957932 +0000 UTC m=+0.150209890 container attach 82603d8fe299aeca040aafdaaff0e56f546a299e0b1142c2765613933cea8140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mayer, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:47:18 np0005596060 amazing_mayer[113687]: 167 167
Jan 26 12:47:18 np0005596060 systemd[1]: libpod-82603d8fe299aeca040aafdaaff0e56f546a299e0b1142c2765613933cea8140.scope: Deactivated successfully.
Jan 26 12:47:18 np0005596060 podman[113628]: 2026-01-26 17:47:18.466566487 +0000 UTC m=+0.152818325 container died 82603d8fe299aeca040aafdaaff0e56f546a299e0b1142c2765613933cea8140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mayer, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:47:18 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b1baade84e010210084c24f9b5513337529d98447365bb377ba56b787dd06c28-merged.mount: Deactivated successfully.
Jan 26 12:47:18 np0005596060 podman[113628]: 2026-01-26 17:47:18.505340663 +0000 UTC m=+0.191592501 container remove 82603d8fe299aeca040aafdaaff0e56f546a299e0b1142c2765613933cea8140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:47:18 np0005596060 systemd[1]: libpod-conmon-82603d8fe299aeca040aafdaaff0e56f546a299e0b1142c2765613933cea8140.scope: Deactivated successfully.
Jan 26 12:47:18 np0005596060 podman[113769]: 2026-01-26 17:47:18.694361368 +0000 UTC m=+0.044825389 container create fa48dfebff1e686b1bf8989d36dd03ccb1b53bf55c013bb6ae0936f6b5c80810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:47:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:18.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:18 np0005596060 systemd[1]: Started libpod-conmon-fa48dfebff1e686b1bf8989d36dd03ccb1b53bf55c013bb6ae0936f6b5c80810.scope.
Jan 26 12:47:18 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:47:18 np0005596060 podman[113769]: 2026-01-26 17:47:18.672775955 +0000 UTC m=+0.023239996 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:47:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df749033eff95223cfd857b7d28cfaef201ee9b67fdb8ef0cea451252a55f13/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df749033eff95223cfd857b7d28cfaef201ee9b67fdb8ef0cea451252a55f13/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df749033eff95223cfd857b7d28cfaef201ee9b67fdb8ef0cea451252a55f13/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8df749033eff95223cfd857b7d28cfaef201ee9b67fdb8ef0cea451252a55f13/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:47:18 np0005596060 podman[113769]: 2026-01-26 17:47:18.795211645 +0000 UTC m=+0.145675716 container init fa48dfebff1e686b1bf8989d36dd03ccb1b53bf55c013bb6ae0936f6b5c80810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:47:18 np0005596060 podman[113769]: 2026-01-26 17:47:18.805932675 +0000 UTC m=+0.156396696 container start fa48dfebff1e686b1bf8989d36dd03ccb1b53bf55c013bb6ae0936f6b5c80810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keller, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 12:47:18 np0005596060 podman[113769]: 2026-01-26 17:47:18.809736251 +0000 UTC m=+0.160200292 container attach fa48dfebff1e686b1bf8989d36dd03ccb1b53bf55c013bb6ae0936f6b5c80810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keller, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 12:47:18 np0005596060 python3.9[113807]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:47:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:19 np0005596060 python3.9[113894]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:47:19 np0005596060 jolly_keller[113811]: {
Jan 26 12:47:19 np0005596060 jolly_keller[113811]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:47:19 np0005596060 jolly_keller[113811]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:47:19 np0005596060 jolly_keller[113811]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:47:19 np0005596060 jolly_keller[113811]:        "osd_id": 1,
Jan 26 12:47:19 np0005596060 jolly_keller[113811]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:47:19 np0005596060 jolly_keller[113811]:        "type": "bluestore"
Jan 26 12:47:19 np0005596060 jolly_keller[113811]:    }
Jan 26 12:47:19 np0005596060 jolly_keller[113811]: }
Jan 26 12:47:19 np0005596060 systemd[1]: libpod-fa48dfebff1e686b1bf8989d36dd03ccb1b53bf55c013bb6ae0936f6b5c80810.scope: Deactivated successfully.
Jan 26 12:47:19 np0005596060 podman[113769]: 2026-01-26 17:47:19.723835588 +0000 UTC m=+1.074299609 container died fa48dfebff1e686b1bf8989d36dd03ccb1b53bf55c013bb6ae0936f6b5c80810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 12:47:19 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8df749033eff95223cfd857b7d28cfaef201ee9b67fdb8ef0cea451252a55f13-merged.mount: Deactivated successfully.
Jan 26 12:47:19 np0005596060 podman[113769]: 2026-01-26 17:47:19.797195434 +0000 UTC m=+1.147659455 container remove fa48dfebff1e686b1bf8989d36dd03ccb1b53bf55c013bb6ae0936f6b5c80810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 12:47:19 np0005596060 systemd[1]: libpod-conmon-fa48dfebff1e686b1bf8989d36dd03ccb1b53bf55c013bb6ae0936f6b5c80810.scope: Deactivated successfully.
Jan 26 12:47:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:47:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:47:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:47:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:47:19 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 6872ef6a-84b2-4682-9cce-def42a1ff7cb does not exist
Jan 26 12:47:19 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b6054881-fa3e-4d23-9ef2-a0d480dc1460 does not exist
Jan 26 12:47:19 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 36357e6b-3f82-4102-a93c-2eb9205c555a does not exist
Jan 26 12:47:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:47:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:47:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:20.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:20 np0005596060 python3.9[114124]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:47:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:20.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:21 np0005596060 python3.9[114276]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:47:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:47:21 np0005596060 python3.9[114429]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:47:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:22.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:22 np0005596060 python3.9[114581]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:47:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:22.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:23 np0005596060 python3.9[114734]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:47:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:24.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:24.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:25 np0005596060 python3.9[114888]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:47:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:26.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:47:26 np0005596060 python3.9[115042]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:47:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:26.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:27 np0005596060 python3.9[115195]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:47:27 np0005596060 python3.9[115347]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:47:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:28.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:28.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:29 np0005596060 python3.9[115500]: ansible-service_facts Invoked
Jan 26 12:47:29 np0005596060 network[115518]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 12:47:29 np0005596060 network[115519]: 'network-scripts' will be removed from distribution in near future.
Jan 26 12:47:29 np0005596060 network[115520]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 12:47:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:30.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:30.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:47:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:32.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:32.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:34.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:34.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:35 np0005596060 python3.9[115975]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:47:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:36.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:47:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:36.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:38.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:38 np0005596060 python3.9[116179]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 26 12:47:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:38.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:40.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:40 np0005596060 python3.9[116332]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:47:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:40.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:41 np0005596060 python3.9[116410]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:47:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:47:41 np0005596060 python3.9[116563]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:47:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:42.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:42 np0005596060 python3.9[116641]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:47:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:42.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:47:43
Jan 26 12:47:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:47:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:47:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['vms', 'images', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control']
Jan 26 12:47:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:47:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:44.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:47:44 np0005596060 python3.9[116794]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:47:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:44.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:46.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:46 np0005596060 python3.9[116947]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:47:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:47:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:46.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:47 np0005596060 python3.9[117032]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:47:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:48.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:48 np0005596060 systemd[1]: session-38.scope: Deactivated successfully.
Jan 26 12:47:48 np0005596060 systemd[1]: session-38.scope: Consumed 25.482s CPU time.
Jan 26 12:47:48 np0005596060 systemd-logind[786]: Session 38 logged out. Waiting for processes to exit.
Jan 26 12:47:48 np0005596060 systemd-logind[786]: Removed session 38.
Jan 26 12:47:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:48.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:47:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:50.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:47:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:50.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:47:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:52.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:52.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:47:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:54.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:47:54 np0005596060 systemd-logind[786]: New session 39 of user zuul.
Jan 26 12:47:54 np0005596060 systemd[1]: Started Session 39 of User zuul.
Jan 26 12:47:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:54.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:55 np0005596060 python3.9[117217]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:47:55 np0005596060 python3.9[117420]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:47:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:56.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:47:56 np0005596060 python3.9[117498]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:47:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:47:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:56.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:47:56 np0005596060 systemd[1]: session-39.scope: Deactivated successfully.
Jan 26 12:47:56 np0005596060 systemd[1]: session-39.scope: Consumed 1.653s CPU time.
Jan 26 12:47:56 np0005596060 systemd-logind[786]: Session 39 logged out. Waiting for processes to exit.
Jan 26 12:47:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:47:56 np0005596060 systemd-logind[786]: Removed session 39.
Jan 26 12:47:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:47:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:47:58.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.302672) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449678302766, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2380, "num_deletes": 251, "total_data_size": 3821021, "memory_usage": 3892496, "flush_reason": "Manual Compaction"}
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449678327666, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3729791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7624, "largest_seqno": 10003, "table_properties": {"data_size": 3719440, "index_size": 6269, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25453, "raw_average_key_size": 21, "raw_value_size": 3696826, "raw_average_value_size": 3098, "num_data_blocks": 278, "num_entries": 1193, "num_filter_entries": 1193, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449490, "oldest_key_time": 1769449490, "file_creation_time": 1769449678, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 25051 microseconds, and 10075 cpu microseconds.
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.327730) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3729791 bytes OK
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.327753) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.329609) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.329624) EVENT_LOG_v1 {"time_micros": 1769449678329618, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.329643) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3810758, prev total WAL file size 3810758, number of live WAL files 2.
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.330825) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3642KB)], [20(7669KB)]
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449678330930, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 11583472, "oldest_snapshot_seqno": -1}
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3858 keys, 9993696 bytes, temperature: kUnknown
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449678454546, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9993696, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9961599, "index_size": 21363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 93132, "raw_average_key_size": 24, "raw_value_size": 9885752, "raw_average_value_size": 2562, "num_data_blocks": 933, "num_entries": 3858, "num_filter_entries": 3858, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769449678, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.456420) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9993696 bytes
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.458792) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.6 rd, 80.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.5 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(5.8) write-amplify(2.7) OK, records in: 4381, records dropped: 523 output_compression: NoCompression
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.458841) EVENT_LOG_v1 {"time_micros": 1769449678458822, "job": 6, "event": "compaction_finished", "compaction_time_micros": 123729, "compaction_time_cpu_micros": 27150, "output_level": 6, "num_output_files": 1, "total_output_size": 9993696, "num_input_records": 4381, "num_output_records": 3858, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449678459847, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449678461888, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.330696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.462048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.462058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.462061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.462063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:47:58 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:47:58.462065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:47:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:47:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:47:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:47:58.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:47:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:00.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:00.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:48:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:02.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:02 np0005596060 systemd-logind[786]: New session 40 of user zuul.
Jan 26 12:48:02 np0005596060 systemd[1]: Started Session 40 of User zuul.
Jan 26 12:48:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:48:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:02.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:48:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:48:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:48:03 np0005596060 python3.9[117680]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:48:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:04.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:04 np0005596060 python3.9[117836]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:48:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:04.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:48:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:05 np0005596060 python3.9[118012]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:05 np0005596060 python3.9[118090]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.3uz0p4uj recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:06.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:48:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:06.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:07 np0005596060 python3.9[118242]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:07 np0005596060 python3.9[118321]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.wnyw4d5n recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:08.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:08 np0005596060 python3.9[118473]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:48:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:08.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:09 np0005596060 python3.9[118625]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:09 np0005596060 python3.9[118704]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:48:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:10.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:10 np0005596060 python3.9[118856]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:10 np0005596060 python3.9[118934]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:48:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:48:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:10.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:48:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:48:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:12.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:12 np0005596060 python3.9[119087]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:48:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:12.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:48:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:13 np0005596060 python3.9[119240]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:13 np0005596060 python3.9[119318]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:48:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:48:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:48:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:48:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:48:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:48:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:48:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:14.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:48:14 np0005596060 python3.9[119470]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:14.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:14 np0005596060 python3.9[119548]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:16 np0005596060 python3.9[119701]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:48:16 np0005596060 systemd[1]: Reloading.
Jan 26 12:48:16 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:48:16 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:48:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:16.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:48:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:48:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:16.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:48:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:17 np0005596060 python3.9[119940]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:17 np0005596060 python3.9[120019]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:48:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:18.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:48:18 np0005596060 python3.9[120171]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:18 np0005596060 python3.9[120249]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:18.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:19 np0005596060 python3.9[120402]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:48:19 np0005596060 systemd[1]: Reloading.
Jan 26 12:48:19 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:48:19 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:48:19 np0005596060 systemd[1]: Starting Create netns directory...
Jan 26 12:48:19 np0005596060 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 26 12:48:19 np0005596060 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 26 12:48:19 np0005596060 systemd[1]: Finished Create netns directory.
Jan 26 12:48:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:48:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:20.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:48:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:48:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:20.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:48:20 np0005596060 python3.9[120693]: ansible-ansible.builtin.service_facts Invoked
Jan 26 12:48:20 np0005596060 network[120725]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 12:48:20 np0005596060 network[120726]: 'network-scripts' will be removed from distribution in near future.
Jan 26 12:48:20 np0005596060 network[120727]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 12:48:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:48:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:22.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:48:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:22.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:48:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:48:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:48:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:48:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:48:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:48:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:24.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:48:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev e838dbbe-788d-47e9-a2eb-c9fb1829985a does not exist
Jan 26 12:48:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 910cea57-818e-40aa-ad03-c49538117cbd does not exist
Jan 26 12:48:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 991000ca-110c-46e8-aa05-1ba4ccde9f05 does not exist
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:48:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:48:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:24.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:24 np0005596060 podman[120949]: 2026-01-26 17:48:24.870452658 +0000 UTC m=+0.050632243 container create d2935a96010649b2e91030154f8392ea0c8b8002f06cd9fa14be817731a0335a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 12:48:24 np0005596060 systemd[1]: Started libpod-conmon-d2935a96010649b2e91030154f8392ea0c8b8002f06cd9fa14be817731a0335a.scope.
Jan 26 12:48:24 np0005596060 podman[120949]: 2026-01-26 17:48:24.849621811 +0000 UTC m=+0.029801426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:48:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:48:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:25 np0005596060 podman[120949]: 2026-01-26 17:48:25.119937265 +0000 UTC m=+0.300116880 container init d2935a96010649b2e91030154f8392ea0c8b8002f06cd9fa14be817731a0335a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 12:48:25 np0005596060 podman[120949]: 2026-01-26 17:48:25.128098192 +0000 UTC m=+0.308277787 container start d2935a96010649b2e91030154f8392ea0c8b8002f06cd9fa14be817731a0335a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:48:25 np0005596060 stoic_blackwell[120970]: 167 167
Jan 26 12:48:25 np0005596060 systemd[1]: libpod-d2935a96010649b2e91030154f8392ea0c8b8002f06cd9fa14be817731a0335a.scope: Deactivated successfully.
Jan 26 12:48:25 np0005596060 conmon[120970]: conmon d2935a96010649b2e910 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2935a96010649b2e91030154f8392ea0c8b8002f06cd9fa14be817731a0335a.scope/container/memory.events
Jan 26 12:48:25 np0005596060 podman[120949]: 2026-01-26 17:48:25.165658293 +0000 UTC m=+0.345837888 container attach d2935a96010649b2e91030154f8392ea0c8b8002f06cd9fa14be817731a0335a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 26 12:48:25 np0005596060 podman[120949]: 2026-01-26 17:48:25.166242158 +0000 UTC m=+0.346421773 container died d2935a96010649b2e91030154f8392ea0c8b8002f06cd9fa14be817731a0335a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:48:25 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d5e49b2ed9efdd6f311f4d32c4c5a3edc24bb2d911c28bed58e451240cf1be3a-merged.mount: Deactivated successfully.
Jan 26 12:48:25 np0005596060 podman[120949]: 2026-01-26 17:48:25.286585285 +0000 UTC m=+0.466764910 container remove d2935a96010649b2e91030154f8392ea0c8b8002f06cd9fa14be817731a0335a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:48:25 np0005596060 systemd[1]: libpod-conmon-d2935a96010649b2e91030154f8392ea0c8b8002f06cd9fa14be817731a0335a.scope: Deactivated successfully.
Jan 26 12:48:25 np0005596060 podman[121022]: 2026-01-26 17:48:25.430810567 +0000 UTC m=+0.030484983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:48:25 np0005596060 podman[121022]: 2026-01-26 17:48:25.544105495 +0000 UTC m=+0.143779931 container create d72aeb1e5ef1c4e82d163e14ef0b551b712f27cbfb45dc84c4ae46622a7f69a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_torvalds, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 12:48:25 np0005596060 systemd[1]: Started libpod-conmon-d72aeb1e5ef1c4e82d163e14ef0b551b712f27cbfb45dc84c4ae46622a7f69a3.scope.
Jan 26 12:48:25 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:48:25 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7197ef67532b33876e13a0ea99a5b4d0152cfd906a89bbcc8e5a23369662da83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:25 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7197ef67532b33876e13a0ea99a5b4d0152cfd906a89bbcc8e5a23369662da83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:25 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7197ef67532b33876e13a0ea99a5b4d0152cfd906a89bbcc8e5a23369662da83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:25 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7197ef67532b33876e13a0ea99a5b4d0152cfd906a89bbcc8e5a23369662da83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:25 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7197ef67532b33876e13a0ea99a5b4d0152cfd906a89bbcc8e5a23369662da83/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:25 np0005596060 podman[121022]: 2026-01-26 17:48:25.644850256 +0000 UTC m=+0.244524682 container init d72aeb1e5ef1c4e82d163e14ef0b551b712f27cbfb45dc84c4ae46622a7f69a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:48:25 np0005596060 podman[121022]: 2026-01-26 17:48:25.653189477 +0000 UTC m=+0.252863873 container start d72aeb1e5ef1c4e82d163e14ef0b551b712f27cbfb45dc84c4ae46622a7f69a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 26 12:48:25 np0005596060 podman[121022]: 2026-01-26 17:48:25.764803333 +0000 UTC m=+0.364477729 container attach d72aeb1e5ef1c4e82d163e14ef0b551b712f27cbfb45dc84c4ae46622a7f69a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 12:48:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:48:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:26.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:48:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:48:26 np0005596060 ecstatic_torvalds[121049]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:48:26 np0005596060 ecstatic_torvalds[121049]: --> relative data size: 1.0
Jan 26 12:48:26 np0005596060 ecstatic_torvalds[121049]: --> All data devices are unavailable
Jan 26 12:48:26 np0005596060 systemd[1]: libpod-d72aeb1e5ef1c4e82d163e14ef0b551b712f27cbfb45dc84c4ae46622a7f69a3.scope: Deactivated successfully.
Jan 26 12:48:26 np0005596060 podman[121022]: 2026-01-26 17:48:26.523497673 +0000 UTC m=+1.123172069 container died d72aeb1e5ef1c4e82d163e14ef0b551b712f27cbfb45dc84c4ae46622a7f69a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_torvalds, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:48:26 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7197ef67532b33876e13a0ea99a5b4d0152cfd906a89bbcc8e5a23369662da83-merged.mount: Deactivated successfully.
Jan 26 12:48:26 np0005596060 podman[121022]: 2026-01-26 17:48:26.614319803 +0000 UTC m=+1.213994199 container remove d72aeb1e5ef1c4e82d163e14ef0b551b712f27cbfb45dc84c4ae46622a7f69a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 12:48:26 np0005596060 systemd[1]: libpod-conmon-d72aeb1e5ef1c4e82d163e14ef0b551b712f27cbfb45dc84c4ae46622a7f69a3.scope: Deactivated successfully.
Jan 26 12:48:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:48:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:48:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:27 np0005596060 python3.9[121333]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:27 np0005596060 podman[121377]: 2026-01-26 17:48:27.270829736 +0000 UTC m=+0.063047568 container create c01ad4b3309b60f749919730fec89d8fa402283c1dd9ed149906364afffd081e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:48:27 np0005596060 systemd[1]: Started libpod-conmon-c01ad4b3309b60f749919730fec89d8fa402283c1dd9ed149906364afffd081e.scope.
Jan 26 12:48:27 np0005596060 podman[121377]: 2026-01-26 17:48:27.240794325 +0000 UTC m=+0.033012217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:48:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:48:27 np0005596060 podman[121377]: 2026-01-26 17:48:27.362327832 +0000 UTC m=+0.154545674 container init c01ad4b3309b60f749919730fec89d8fa402283c1dd9ed149906364afffd081e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 12:48:27 np0005596060 podman[121377]: 2026-01-26 17:48:27.371753461 +0000 UTC m=+0.163971273 container start c01ad4b3309b60f749919730fec89d8fa402283c1dd9ed149906364afffd081e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 12:48:27 np0005596060 podman[121377]: 2026-01-26 17:48:27.375944987 +0000 UTC m=+0.168162799 container attach c01ad4b3309b60f749919730fec89d8fa402283c1dd9ed149906364afffd081e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 12:48:27 np0005596060 eager_wilson[121440]: 167 167
Jan 26 12:48:27 np0005596060 systemd[1]: libpod-c01ad4b3309b60f749919730fec89d8fa402283c1dd9ed149906364afffd081e.scope: Deactivated successfully.
Jan 26 12:48:27 np0005596060 conmon[121440]: conmon c01ad4b3309b60f74991 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c01ad4b3309b60f749919730fec89d8fa402283c1dd9ed149906364afffd081e.scope/container/memory.events
Jan 26 12:48:27 np0005596060 podman[121377]: 2026-01-26 17:48:27.379647681 +0000 UTC m=+0.171865483 container died c01ad4b3309b60f749919730fec89d8fa402283c1dd9ed149906364afffd081e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:48:27 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b992e467f8edf25ba5bb15c73e8b7ad44ff2158aa2eec496d4b4edc53b8a348a-merged.mount: Deactivated successfully.
Jan 26 12:48:27 np0005596060 podman[121377]: 2026-01-26 17:48:27.415487398 +0000 UTC m=+0.207705200 container remove c01ad4b3309b60f749919730fec89d8fa402283c1dd9ed149906364afffd081e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_wilson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:48:27 np0005596060 systemd[1]: libpod-conmon-c01ad4b3309b60f749919730fec89d8fa402283c1dd9ed149906364afffd081e.scope: Deactivated successfully.
Jan 26 12:48:27 np0005596060 podman[121495]: 2026-01-26 17:48:27.608397713 +0000 UTC m=+0.067309745 container create 18f52818d6da46e60f339e2c13ef70e129d9c57aa548f2b4a543075e9b898028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_margulis, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:48:27 np0005596060 python3.9[121489]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:27 np0005596060 podman[121495]: 2026-01-26 17:48:27.582072786 +0000 UTC m=+0.040984918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:48:27 np0005596060 systemd[1]: Started libpod-conmon-18f52818d6da46e60f339e2c13ef70e129d9c57aa548f2b4a543075e9b898028.scope.
Jan 26 12:48:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:48:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6745d2685a2a2bf1ce0ad663aaf028c98fa213dc6ecadd426057a4546ebe609f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6745d2685a2a2bf1ce0ad663aaf028c98fa213dc6ecadd426057a4546ebe609f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6745d2685a2a2bf1ce0ad663aaf028c98fa213dc6ecadd426057a4546ebe609f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6745d2685a2a2bf1ce0ad663aaf028c98fa213dc6ecadd426057a4546ebe609f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:27 np0005596060 podman[121495]: 2026-01-26 17:48:27.729025157 +0000 UTC m=+0.187937209 container init 18f52818d6da46e60f339e2c13ef70e129d9c57aa548f2b4a543075e9b898028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_margulis, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:48:27 np0005596060 podman[121495]: 2026-01-26 17:48:27.73860274 +0000 UTC m=+0.197514782 container start 18f52818d6da46e60f339e2c13ef70e129d9c57aa548f2b4a543075e9b898028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_margulis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 26 12:48:27 np0005596060 podman[121495]: 2026-01-26 17:48:27.743063043 +0000 UTC m=+0.201975085 container attach 18f52818d6da46e60f339e2c13ef70e129d9c57aa548f2b4a543075e9b898028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 12:48:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:28.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:28 np0005596060 nice_margulis[121512]: {
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:    "1": [
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:        {
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            "devices": [
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "/dev/loop3"
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            ],
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            "lv_name": "ceph_lv0",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            "lv_size": "7511998464",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            "name": "ceph_lv0",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            "tags": {
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "ceph.cluster_name": "ceph",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "ceph.crush_device_class": "",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "ceph.encrypted": "0",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "ceph.osd_id": "1",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "ceph.type": "block",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:                "ceph.vdo": "0"
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            },
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            "type": "block",
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:            "vg_name": "ceph_vg0"
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:        }
Jan 26 12:48:28 np0005596060 nice_margulis[121512]:    ]
Jan 26 12:48:28 np0005596060 nice_margulis[121512]: }
Jan 26 12:48:28 np0005596060 systemd[1]: libpod-18f52818d6da46e60f339e2c13ef70e129d9c57aa548f2b4a543075e9b898028.scope: Deactivated successfully.
Jan 26 12:48:28 np0005596060 podman[121495]: 2026-01-26 17:48:28.491200424 +0000 UTC m=+0.950112466 container died 18f52818d6da46e60f339e2c13ef70e129d9c57aa548f2b4a543075e9b898028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_margulis, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:48:28 np0005596060 systemd[1]: var-lib-containers-storage-overlay-6745d2685a2a2bf1ce0ad663aaf028c98fa213dc6ecadd426057a4546ebe609f-merged.mount: Deactivated successfully.
Jan 26 12:48:28 np0005596060 podman[121495]: 2026-01-26 17:48:28.57199814 +0000 UTC m=+1.030910182 container remove 18f52818d6da46e60f339e2c13ef70e129d9c57aa548f2b4a543075e9b898028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_margulis, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 12:48:28 np0005596060 systemd[1]: libpod-conmon-18f52818d6da46e60f339e2c13ef70e129d9c57aa548f2b4a543075e9b898028.scope: Deactivated successfully.
Jan 26 12:48:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:28.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:29 np0005596060 podman[121771]: 2026-01-26 17:48:29.263042677 +0000 UTC m=+0.039398698 container create 0c465542189bbfdb5425404653e8f51cd54e484f9f147dba9cf9d6aba152a321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:48:29 np0005596060 systemd[1]: Started libpod-conmon-0c465542189bbfdb5425404653e8f51cd54e484f9f147dba9cf9d6aba152a321.scope.
Jan 26 12:48:29 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:48:29 np0005596060 podman[121771]: 2026-01-26 17:48:29.247086243 +0000 UTC m=+0.023442294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:48:29 np0005596060 podman[121771]: 2026-01-26 17:48:29.35006186 +0000 UTC m=+0.126417911 container init 0c465542189bbfdb5425404653e8f51cd54e484f9f147dba9cf9d6aba152a321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:48:29 np0005596060 podman[121771]: 2026-01-26 17:48:29.360501035 +0000 UTC m=+0.136857066 container start 0c465542189bbfdb5425404653e8f51cd54e484f9f147dba9cf9d6aba152a321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 12:48:29 np0005596060 podman[121771]: 2026-01-26 17:48:29.364330182 +0000 UTC m=+0.140686213 container attach 0c465542189bbfdb5425404653e8f51cd54e484f9f147dba9cf9d6aba152a321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:48:29 np0005596060 nifty_goodall[121817]: 167 167
Jan 26 12:48:29 np0005596060 systemd[1]: libpod-0c465542189bbfdb5425404653e8f51cd54e484f9f147dba9cf9d6aba152a321.scope: Deactivated successfully.
Jan 26 12:48:29 np0005596060 podman[121771]: 2026-01-26 17:48:29.366522457 +0000 UTC m=+0.142878488 container died 0c465542189bbfdb5425404653e8f51cd54e484f9f147dba9cf9d6aba152a321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 12:48:29 np0005596060 systemd[1]: var-lib-containers-storage-overlay-44bdb08777e5d47c2395cbfab8f429b41826c599949cbff8ebcde7ebc7cd613e-merged.mount: Deactivated successfully.
Jan 26 12:48:29 np0005596060 podman[121771]: 2026-01-26 17:48:29.407349251 +0000 UTC m=+0.183705272 container remove 0c465542189bbfdb5425404653e8f51cd54e484f9f147dba9cf9d6aba152a321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_goodall, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 26 12:48:29 np0005596060 systemd[1]: libpod-conmon-0c465542189bbfdb5425404653e8f51cd54e484f9f147dba9cf9d6aba152a321.scope: Deactivated successfully.
Jan 26 12:48:29 np0005596060 python3.9[121845]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:29 np0005596060 podman[121863]: 2026-01-26 17:48:29.585261416 +0000 UTC m=+0.062793941 container create 90f2be193901876608a1331b3a51d8e3bc543aa0ca92770f2136482d2def5c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:48:29 np0005596060 systemd[1]: Started libpod-conmon-90f2be193901876608a1331b3a51d8e3bc543aa0ca92770f2136482d2def5c2f.scope.
Jan 26 12:48:29 np0005596060 podman[121863]: 2026-01-26 17:48:29.553141852 +0000 UTC m=+0.030674487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:48:29 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:48:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7c503d9172a6c155a8f9b081643a96fd1405c1d3a48eecd5fc382ff45cad5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7c503d9172a6c155a8f9b081643a96fd1405c1d3a48eecd5fc382ff45cad5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7c503d9172a6c155a8f9b081643a96fd1405c1d3a48eecd5fc382ff45cad5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da7c503d9172a6c155a8f9b081643a96fd1405c1d3a48eecd5fc382ff45cad5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:48:29 np0005596060 podman[121863]: 2026-01-26 17:48:29.669860708 +0000 UTC m=+0.147393233 container init 90f2be193901876608a1331b3a51d8e3bc543aa0ca92770f2136482d2def5c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 26 12:48:29 np0005596060 podman[121863]: 2026-01-26 17:48:29.678853875 +0000 UTC m=+0.156386400 container start 90f2be193901876608a1331b3a51d8e3bc543aa0ca92770f2136482d2def5c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hawking, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 12:48:29 np0005596060 podman[121863]: 2026-01-26 17:48:29.683164705 +0000 UTC m=+0.160697230 container attach 90f2be193901876608a1331b3a51d8e3bc543aa0ca92770f2136482d2def5c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hawking, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 26 12:48:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:30.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:30 np0005596060 python3.9[122036]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:30 np0005596060 wonderful_hawking[121883]: {
Jan 26 12:48:30 np0005596060 wonderful_hawking[121883]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:48:30 np0005596060 wonderful_hawking[121883]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:48:30 np0005596060 wonderful_hawking[121883]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:48:30 np0005596060 wonderful_hawking[121883]:        "osd_id": 1,
Jan 26 12:48:30 np0005596060 wonderful_hawking[121883]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:48:30 np0005596060 wonderful_hawking[121883]:        "type": "bluestore"
Jan 26 12:48:30 np0005596060 wonderful_hawking[121883]:    }
Jan 26 12:48:30 np0005596060 wonderful_hawking[121883]: }
Jan 26 12:48:30 np0005596060 systemd[1]: libpod-90f2be193901876608a1331b3a51d8e3bc543aa0ca92770f2136482d2def5c2f.scope: Deactivated successfully.
Jan 26 12:48:30 np0005596060 systemd[1]: libpod-90f2be193901876608a1331b3a51d8e3bc543aa0ca92770f2136482d2def5c2f.scope: Consumed 1.000s CPU time.
Jan 26 12:48:30 np0005596060 podman[121863]: 2026-01-26 17:48:30.681049731 +0000 UTC m=+1.158582306 container died 90f2be193901876608a1331b3a51d8e3bc543aa0ca92770f2136482d2def5c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hawking, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:48:30 np0005596060 systemd[1]: var-lib-containers-storage-overlay-da7c503d9172a6c155a8f9b081643a96fd1405c1d3a48eecd5fc382ff45cad5c-merged.mount: Deactivated successfully.
Jan 26 12:48:30 np0005596060 podman[121863]: 2026-01-26 17:48:30.760645366 +0000 UTC m=+1.238177891 container remove 90f2be193901876608a1331b3a51d8e3bc543aa0ca92770f2136482d2def5c2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hawking, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 12:48:30 np0005596060 python3.9[122122]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:30 np0005596060 systemd[1]: libpod-conmon-90f2be193901876608a1331b3a51d8e3bc543aa0ca92770f2136482d2def5c2f.scope: Deactivated successfully.
Jan 26 12:48:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:48:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:48:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:48:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:48:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:30.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:48:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:48:30 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev c5b00d06-2c90-4637-a477-7af0395ed4b0 does not exist
Jan 26 12:48:30 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d41ab1bc-1580-457b-b84e-7b64f1b207ce does not exist
Jan 26 12:48:30 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d376df6b-2196-412b-a75b-72e336ac47de does not exist
Jan 26 12:48:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:48:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:48:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:48:32 np0005596060 python3.9[122346]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 26 12:48:32 np0005596060 systemd[1]: Starting Time & Date Service...
Jan 26 12:48:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:32.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:32 np0005596060 systemd[1]: Started Time & Date Service.
Jan 26 12:48:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:32.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:33 np0005596060 python3.9[122502]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:33 np0005596060 python3.9[122655]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:34.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:34 np0005596060 python3.9[122733]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:34.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:34 np0005596060 python3.9[122885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:35 np0005596060 python3.9[122964]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.gyootbes recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:36 np0005596060 python3.9[123124]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:36.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:48:36 np0005596060 python3.9[123244]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:36.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:37 np0005596060 python3.9[123397]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:48:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:48:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:38.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:48:38 np0005596060 python3[123550]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 12:48:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:48:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:38.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:48:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:39 np0005596060 python3.9[123703]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:39 np0005596060 python3.9[123781]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:48:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:40.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:48:40 np0005596060 python3.9[123933]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:40.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:41 np0005596060 python3.9[124059]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449719.9073284-899-217194661125734/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.487229) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449721487318, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 605, "num_deletes": 250, "total_data_size": 789258, "memory_usage": 801392, "flush_reason": "Manual Compaction"}
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449721493730, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 552031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10004, "largest_seqno": 10608, "table_properties": {"data_size": 549077, "index_size": 926, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7462, "raw_average_key_size": 19, "raw_value_size": 543001, "raw_average_value_size": 1440, "num_data_blocks": 40, "num_entries": 377, "num_filter_entries": 377, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449679, "oldest_key_time": 1769449679, "file_creation_time": 1769449721, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 6554 microseconds, and 2803 cpu microseconds.
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.493784) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 552031 bytes OK
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.493806) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.495585) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.495599) EVENT_LOG_v1 {"time_micros": 1769449721495595, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.495615) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 786033, prev total WAL file size 786033, number of live WAL files 2.
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.496245) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(539KB)], [23(9759KB)]
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449721496346, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10545727, "oldest_snapshot_seqno": -1}
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3735 keys, 7811850 bytes, temperature: kUnknown
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449721552709, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7811850, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7783716, "index_size": 17720, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 91059, "raw_average_key_size": 24, "raw_value_size": 7713043, "raw_average_value_size": 2065, "num_data_blocks": 773, "num_entries": 3735, "num_filter_entries": 3735, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769449721, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.553451) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7811850 bytes
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.555083) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.8 rd, 138.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.5 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(33.3) write-amplify(14.2) OK, records in: 4235, records dropped: 500 output_compression: NoCompression
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.555116) EVENT_LOG_v1 {"time_micros": 1769449721555100, "job": 8, "event": "compaction_finished", "compaction_time_micros": 56449, "compaction_time_cpu_micros": 23321, "output_level": 6, "num_output_files": 1, "total_output_size": 7811850, "num_input_records": 4235, "num_output_records": 3735, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449721555779, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449721559916, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.496073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.559965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.559970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.559973) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.559975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:48:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:48:41.559976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:48:41 np0005596060 python3.9[124211]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:48:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:42.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:48:42 np0005596060 python3.9[124289]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:42.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:43 np0005596060 python3.9[124442]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:43 np0005596060 python3.9[124520]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:48:43
Jan 26 12:48:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:48:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:48:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['vms', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', '.mgr', 'volumes', 'default.rgw.meta']
Jan 26 12:48:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:48:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:44.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:48:44 np0005596060 python3.9[124672]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:48:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:48:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:44.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:48:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:45 np0005596060 python3.9[124750]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:46.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:48:46 np0005596060 python3.9[124903]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:48:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:48:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:46.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:48:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:47 np0005596060 python3.9[125059]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:48.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:48 np0005596060 python3.9[125211]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:48 np0005596060 systemd[1]: session-18.scope: Deactivated successfully.
Jan 26 12:48:48 np0005596060 systemd[1]: session-18.scope: Consumed 1min 22.466s CPU time.
Jan 26 12:48:48 np0005596060 systemd-logind[786]: Session 18 logged out. Waiting for processes to exit.
Jan 26 12:48:48 np0005596060 systemd-logind[786]: Removed session 18.
Jan 26 12:48:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:48:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:48.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:48:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:49 np0005596060 python3.9[125363]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:48:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:50.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:50 np0005596060 python3.9[125516]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 26 12:48:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:50.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:51 np0005596060 python3.9[125668]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 26 12:48:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:48:51 np0005596060 systemd[1]: session-40.scope: Deactivated successfully.
Jan 26 12:48:51 np0005596060 systemd[1]: session-40.scope: Consumed 32.183s CPU time.
Jan 26 12:48:51 np0005596060 systemd-logind[786]: Session 40 logged out. Waiting for processes to exit.
Jan 26 12:48:51 np0005596060 systemd-logind[786]: Removed session 40.
Jan 26 12:48:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:52.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:52.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:54.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 12:48:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2347 writes, 10K keys, 2347 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2347 writes, 2347 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2347 writes, 10K keys, 2347 commit groups, 1.0 writes per commit group, ingest: 13.74 MB, 0.02 MB/s#012Interval WAL: 2347 writes, 2347 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.9      0.84              0.04         4    0.209       0      0       0.0       0.0#012  L6      1/0    7.45 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1    119.7    102.3      0.24              0.07         3    0.080     12K   1315       0.0       0.0#012 Sum      1/0    7.45 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     26.6     33.6      1.07              0.11         7    0.154     12K   1315       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     26.7     33.6      1.07              0.11         6    0.179     12K   1315       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    119.7    102.3      0.24              0.07         3    0.080     12K   1315       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.83              0.04         3    0.278       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.011, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 1.1 seconds#012Interval compaction: 0.04 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5652937211f0#2 capacity: 304.00 MB usage: 1.22 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(63,1.09 MB,0.357718%) FilterBlock(8,41.61 KB,0.0133665%) IndexBlock(8,92.33 KB,0.0296593%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 26 12:48:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:54.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:56.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:48:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:48:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:56.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:48:57 np0005596060 systemd-logind[786]: New session 41 of user zuul.
Jan 26 12:48:57 np0005596060 systemd[1]: Started Session 41 of User zuul.
Jan 26 12:48:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:58 np0005596060 python3.9[125903]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 26 12:48:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:48:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:48:58.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:48:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:48:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:48:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:48:58.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:48:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:48:59 np0005596060 python3.9[126055]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:49:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:00.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:00 np0005596060 python3.9[126210]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 26 12:49:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:00.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:01 np0005596060 python3.9[126362]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.mhl6wqaz follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:49:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:49:01 np0005596060 python3.9[126488]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.mhl6wqaz mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769449740.661714-107-208616102604531/.source.mhl6wqaz _original_basename=.grm2mgow follow=False checksum=a595db097bfa207c8b20f83c8c918987b40a76ed backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:02.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:02 np0005596060 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 26 12:49:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:02.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:02 np0005596060 python3.9[126642]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:49:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:49:03 np0005596060 python3.9[126795]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8WJSSyps5/MOwluaYVKvHLbB3OOMaGha+S5zKQqPSAcedSyuyvzK3GC+qad2ZbcfCfiNZHWM+ylBueRDL14BxpBXCAqNKHN1Yo1Fvlb4JCkcbhbgkVGemDEsbBiNmTtSlxRI40uI8M0+E42b22Zh7qz1PC1XmS0po5y6SwzcfgbnZtuyVFsvGHqDWkkWV/gsjiZ57qMaC+DJaIhvfW+qObinKJqXeuPQbF6yjfhXPHf2nwYEGY9rM5zEvZyfC/Dnrg62lDFjq4LGLrb83ipcBQq+zMejeECDs/u6noWAMs8f5HcxW0zembv86K5pOtPJKA13xVImv+kfGS+EctaKEBB/ooqOhN9AdXFEJUuSDn/2iUm07NnrEN9WhrfiuxLCO/lBWwxFGKcQECRviuCwE51F4fVEduv4ZiDgPcsHo+fYbxXsG50xc8/Yumd+a60pkpu09wVk1P3fCbFbRd9kD4elm067blILF+Zs+YuWnuaK3LiCb+qzmDKQB4AArubE=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILePT0ow4c3ejDoUzP/5T/dIHfr1xTtwEP/2z/Lf68vz#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB0tLEQbxQsuF0gTFyU7HBbMRjNrt7rMl1+QXcK3yfs0Q29raINYHrTVwzWeSuTUiO464HBZr4aPyLzhd+2Z3xs=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0zaLI2LTbNOyYJLCkCHwBNvCbWxyjbFipOdeKx9WVOSI6BraalDHlRpumUYDm8JC8abEq1qaZCBLmxjPXdZu5OGr/kPmf6SKEUmhy4iVIlqya8lpE59ci/zJO3FmNG+BncaGfJAQ0wqUgfNc/27u/wxD+gMrd6Ocz1dRHjtV22N4KnHAZP+sb0G1LZUx4WhJ07B4r/YaWeXOL2puHk0zHfnxSMIyyEvTlx9zlqSArxDuyq6AA7skTmkIlIC7eYbws7R3oP5PdtDl0sj1SEaTS4uAOSxbcYCV3H/IBa5evA+pxo7m3gf2YQ/QsGcfMQF4GefF3pWfZN0BGK7DWb3bckv62Oq9geYx47ccajXIEt3vsncvsrZhozX5OPyxW4eLJ8r7ovCX+5uGTuF9LrmwDdc7XRJ7rXBWSKh66/yxUcPGEQIk7OoEA30ZmKeipyMJQHHrWKxAqkqz6+ZQ41KvXaFIB1lRQf4tlFTAfrm9xwChyoCfrU95QYM4V+zqCQ6E=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILbMiL3+EkWDKAQHi9JT5Xqvk8rNrdT5SVX2Gg2RyqsV#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLVPallz3Z+vrxzfd9Dxuo/G10ZpIDOna2ftaoWWaEiUQrn77C3vB8d1zHHnHxMi8qaS4W4lfA32FenhGfBnVVU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCOz38rnMu5RPID5R9a4AOkL2Ge6a4dzWxjmOZKIbuidITYge9lyZ+ThI161k8ZELWw9SBoQvNwVmySyCRLJH9qPhNCVmEqUqZJohUEZQ+lNpyZk3JkhZsgLTYjkdV/DPqp3iLlV/asPhl18j+CFKmN5Dx0qMsAg1f9CbOZwhdgeVEeB3IqdjBrPIMgAwVlacU9ty90SAUJj+RoMZePfAh7i2q7VTPHcvKRA1Mz4Q+RRKojI3DfR0se9vFL9KYNhD/O0JbAZksdom7tVuZ6LjcyIYqBUeB2jYwSO66sVFNWI4JwFEr5OOb1EiOGWGudWuZVfdeD+TYeZk0hco2GhtmXBVDWWeYQNNXAKRcQ7aM2y9SlN6gOKzJq08LuoShMOl8IuErTDV7Cp3WpuPPqDc5gv0swDVoOXsbju1Bxm2aLE7d1GiJbuhLS+pvIgc0MrnyOhUrTGTAdyfZ4gsw6BekK5Gf22C6xvZ865/N5LCr5jahKtqujZ6X6sECNsBQ1j0M=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOdmNmdvqfqzPDx4l6nvkEw8mwn78xc6LydRgAb6QEGT#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKb0RFR0G0BOVptSrXD3m/y/AD2q+whTWANps4FtvEcdq4zrHxHJM7JO/mkAyT4VEcyt7wmguNEWF5NqwEZeFZ4=#012 create=True mode=0644 path=/tmp/ansible.mhl6wqaz state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:04.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:04 np0005596060 python3.9[126947]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.mhl6wqaz' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:49:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:04.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:05 np0005596060 python3.9[127101]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.mhl6wqaz state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:05 np0005596060 systemd[1]: session-41.scope: Deactivated successfully.
Jan 26 12:49:05 np0005596060 systemd[1]: session-41.scope: Consumed 6.053s CPU time.
Jan 26 12:49:05 np0005596060 systemd-logind[786]: Session 41 logged out. Waiting for processes to exit.
Jan 26 12:49:05 np0005596060 systemd-logind[786]: Removed session 41.
Jan 26 12:49:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:06.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:49:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:06.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:08.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:49:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:08.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:49:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:10.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:10.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:11 np0005596060 systemd-logind[786]: New session 42 of user zuul.
Jan 26 12:49:11 np0005596060 systemd[1]: Started Session 42 of User zuul.
Jan 26 12:49:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:49:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:12.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:12 np0005596060 python3.9[127284]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:49:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:12.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:13 np0005596060 python3.9[127441]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 26 12:49:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:49:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:49:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:49:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:49:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:49:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:49:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:49:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:14.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:49:14 np0005596060 python3.9[127597]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:49:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:14.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:15 np0005596060 python3.9[127751]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:49:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:16.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:49:16 np0005596060 python3.9[127930]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:49:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:16.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:17 np0005596060 python3.9[128106]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:17 np0005596060 systemd[1]: session-42.scope: Deactivated successfully.
Jan 26 12:49:17 np0005596060 systemd[1]: session-42.scope: Consumed 4.347s CPU time.
Jan 26 12:49:17 np0005596060 systemd-logind[786]: Session 42 logged out. Waiting for processes to exit.
Jan 26 12:49:17 np0005596060 systemd-logind[786]: Removed session 42.
Jan 26 12:49:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:18.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:18.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:20.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:49:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:20.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:49:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:49:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:22.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:22.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:23 np0005596060 systemd-logind[786]: New session 43 of user zuul.
Jan 26 12:49:23 np0005596060 systemd[1]: Started Session 43 of User zuul.
Jan 26 12:49:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:49:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:24.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:49:24 np0005596060 python3.9[128288]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:49:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:24.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:25 np0005596060 python3.9[128445]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:49:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:26.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:49:26 np0005596060 python3.9[128529]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 26 12:49:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:49:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:26.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:49:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:28.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:28 np0005596060 python3.9[128681]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:49:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:28.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:30 np0005596060 python3.9[128833]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 12:49:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:30.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:49:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:49:30 np0005596060 python3.9[128983]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:49:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:49:31 np0005596060 python3.9[129212]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:49:32 np0005596060 systemd[1]: session-43.scope: Deactivated successfully.
Jan 26 12:49:32 np0005596060 systemd[1]: session-43.scope: Consumed 6.436s CPU time.
Jan 26 12:49:32 np0005596060 systemd-logind[786]: Session 43 logged out. Waiting for processes to exit.
Jan 26 12:49:32 np0005596060 systemd-logind[786]: Removed session 43.
Jan 26 12:49:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:32.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:49:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:32.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:49:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:49:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:49:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:49:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:49:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:34.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:49:34 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev a9fa0322-0e6c-4f0c-98fb-b3e7faf3e91d does not exist
Jan 26 12:49:34 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 745a5cb5-210c-4d23-a284-99ddeaf9e575 does not exist
Jan 26 12:49:34 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 4449d53f-c5e1-4b2d-85ff-fa98c34616a9 does not exist
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:49:34 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:49:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:34.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:35 np0005596060 podman[129431]: 2026-01-26 17:49:35.114635634 +0000 UTC m=+0.049644484 container create dc1f4b996bfa61c49db004a7aa407a3e4ef305e695ea6366f30402c669761ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:49:35 np0005596060 systemd[1]: Started libpod-conmon-dc1f4b996bfa61c49db004a7aa407a3e4ef305e695ea6366f30402c669761ace.scope.
Jan 26 12:49:35 np0005596060 podman[129431]: 2026-01-26 17:49:35.090640464 +0000 UTC m=+0.025649324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:49:35 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:49:35 np0005596060 podman[129431]: 2026-01-26 17:49:35.207134117 +0000 UTC m=+0.142142977 container init dc1f4b996bfa61c49db004a7aa407a3e4ef305e695ea6366f30402c669761ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ritchie, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 12:49:35 np0005596060 podman[129431]: 2026-01-26 17:49:35.21509483 +0000 UTC m=+0.150103670 container start dc1f4b996bfa61c49db004a7aa407a3e4ef305e695ea6366f30402c669761ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ritchie, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 12:49:35 np0005596060 podman[129431]: 2026-01-26 17:49:35.218443745 +0000 UTC m=+0.153452605 container attach dc1f4b996bfa61c49db004a7aa407a3e4ef305e695ea6366f30402c669761ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ritchie, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:49:35 np0005596060 zealous_ritchie[129447]: 167 167
Jan 26 12:49:35 np0005596060 systemd[1]: libpod-dc1f4b996bfa61c49db004a7aa407a3e4ef305e695ea6366f30402c669761ace.scope: Deactivated successfully.
Jan 26 12:49:35 np0005596060 podman[129431]: 2026-01-26 17:49:35.223492553 +0000 UTC m=+0.158501393 container died dc1f4b996bfa61c49db004a7aa407a3e4ef305e695ea6366f30402c669761ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ritchie, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:49:35 np0005596060 systemd[1]: var-lib-containers-storage-overlay-3f057b1d78c81f02f3ae0a201059d2a30d1b3d39cd926a099936eebbce72a408-merged.mount: Deactivated successfully.
Jan 26 12:49:35 np0005596060 podman[129431]: 2026-01-26 17:49:35.264338032 +0000 UTC m=+0.199346872 container remove dc1f4b996bfa61c49db004a7aa407a3e4ef305e695ea6366f30402c669761ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 12:49:35 np0005596060 systemd[1]: libpod-conmon-dc1f4b996bfa61c49db004a7aa407a3e4ef305e695ea6366f30402c669761ace.scope: Deactivated successfully.
Jan 26 12:49:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:35 np0005596060 podman[129474]: 2026-01-26 17:49:35.450749854 +0000 UTC m=+0.055590385 container create 1944dbf582c1224f46ae931bca116bc274920678bf02482cebb224ce595ede2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swirles, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:49:35 np0005596060 systemd[1]: Started libpod-conmon-1944dbf582c1224f46ae931bca116bc274920678bf02482cebb224ce595ede2b.scope.
Jan 26 12:49:35 np0005596060 podman[129474]: 2026-01-26 17:49:35.427374709 +0000 UTC m=+0.032215290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:49:35 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:49:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b8cffeccfa5f25821022e636d4eed82cca8b4ecd4d8ce53dcdc4078565a024/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b8cffeccfa5f25821022e636d4eed82cca8b4ecd4d8ce53dcdc4078565a024/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b8cffeccfa5f25821022e636d4eed82cca8b4ecd4d8ce53dcdc4078565a024/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b8cffeccfa5f25821022e636d4eed82cca8b4ecd4d8ce53dcdc4078565a024/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5b8cffeccfa5f25821022e636d4eed82cca8b4ecd4d8ce53dcdc4078565a024/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:35 np0005596060 podman[129474]: 2026-01-26 17:49:35.550366578 +0000 UTC m=+0.155207139 container init 1944dbf582c1224f46ae931bca116bc274920678bf02482cebb224ce595ede2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swirles, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 12:49:35 np0005596060 podman[129474]: 2026-01-26 17:49:35.558345141 +0000 UTC m=+0.163185712 container start 1944dbf582c1224f46ae931bca116bc274920678bf02482cebb224ce595ede2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 12:49:35 np0005596060 podman[129474]: 2026-01-26 17:49:35.563789469 +0000 UTC m=+0.168630040 container attach 1944dbf582c1224f46ae931bca116bc274920678bf02482cebb224ce595ede2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 26 12:49:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:36.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:49:36 np0005596060 loving_swirles[129490]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:49:36 np0005596060 loving_swirles[129490]: --> relative data size: 1.0
Jan 26 12:49:36 np0005596060 loving_swirles[129490]: --> All data devices are unavailable
Jan 26 12:49:36 np0005596060 systemd[1]: libpod-1944dbf582c1224f46ae931bca116bc274920678bf02482cebb224ce595ede2b.scope: Deactivated successfully.
Jan 26 12:49:36 np0005596060 podman[129474]: 2026-01-26 17:49:36.411359318 +0000 UTC m=+1.016199869 container died 1944dbf582c1224f46ae931bca116bc274920678bf02482cebb224ce595ede2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 12:49:36 np0005596060 systemd[1]: var-lib-containers-storage-overlay-e5b8cffeccfa5f25821022e636d4eed82cca8b4ecd4d8ce53dcdc4078565a024-merged.mount: Deactivated successfully.
Jan 26 12:49:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:36.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:37 np0005596060 podman[129474]: 2026-01-26 17:49:37.073673316 +0000 UTC m=+1.678513887 container remove 1944dbf582c1224f46ae931bca116bc274920678bf02482cebb224ce595ede2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_swirles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:49:37 np0005596060 systemd[1]: libpod-conmon-1944dbf582c1224f46ae931bca116bc274920678bf02482cebb224ce595ede2b.scope: Deactivated successfully.
Jan 26 12:49:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:37 np0005596060 systemd-logind[786]: New session 44 of user zuul.
Jan 26 12:49:37 np0005596060 systemd[1]: Started Session 44 of User zuul.
Jan 26 12:49:37 np0005596060 podman[129711]: 2026-01-26 17:49:37.772113012 +0000 UTC m=+0.053407369 container create d0428183d88823d91dd66207c9633c1ec13087a4bc7abc74becefa98cacfa54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_clarke, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:49:37 np0005596060 systemd[1]: Started libpod-conmon-d0428183d88823d91dd66207c9633c1ec13087a4bc7abc74becefa98cacfa54b.scope.
Jan 26 12:49:37 np0005596060 podman[129711]: 2026-01-26 17:49:37.746629884 +0000 UTC m=+0.027924271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:49:37 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:49:37 np0005596060 podman[129711]: 2026-01-26 17:49:37.864092932 +0000 UTC m=+0.145387289 container init d0428183d88823d91dd66207c9633c1ec13087a4bc7abc74becefa98cacfa54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_clarke, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:49:37 np0005596060 podman[129711]: 2026-01-26 17:49:37.872907856 +0000 UTC m=+0.154202223 container start d0428183d88823d91dd66207c9633c1ec13087a4bc7abc74becefa98cacfa54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_clarke, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 12:49:37 np0005596060 podman[129711]: 2026-01-26 17:49:37.876259061 +0000 UTC m=+0.157553418 container attach d0428183d88823d91dd66207c9633c1ec13087a4bc7abc74becefa98cacfa54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 12:49:37 np0005596060 boring_clarke[129729]: 167 167
Jan 26 12:49:37 np0005596060 systemd[1]: libpod-d0428183d88823d91dd66207c9633c1ec13087a4bc7abc74becefa98cacfa54b.scope: Deactivated successfully.
Jan 26 12:49:37 np0005596060 podman[129711]: 2026-01-26 17:49:37.881264739 +0000 UTC m=+0.162559106 container died d0428183d88823d91dd66207c9633c1ec13087a4bc7abc74becefa98cacfa54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 12:49:37 np0005596060 systemd[1]: var-lib-containers-storage-overlay-91f58f596d345cf282a469c6f882d8917817a62d6b107fe56527d925acef5a04-merged.mount: Deactivated successfully.
Jan 26 12:49:37 np0005596060 podman[129711]: 2026-01-26 17:49:37.926036107 +0000 UTC m=+0.207330464 container remove d0428183d88823d91dd66207c9633c1ec13087a4bc7abc74becefa98cacfa54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:49:37 np0005596060 systemd[1]: libpod-conmon-d0428183d88823d91dd66207c9633c1ec13087a4bc7abc74becefa98cacfa54b.scope: Deactivated successfully.
Jan 26 12:49:38 np0005596060 podman[129804]: 2026-01-26 17:49:38.141390305 +0000 UTC m=+0.094636058 container create 2cf370e9e001bcd94dc9407516646ecd82cd35027e08a1b4c2aebce5321b884b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:49:38 np0005596060 podman[129804]: 2026-01-26 17:49:38.072842052 +0000 UTC m=+0.026087845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:49:38 np0005596060 systemd[1]: Started libpod-conmon-2cf370e9e001bcd94dc9407516646ecd82cd35027e08a1b4c2aebce5321b884b.scope.
Jan 26 12:49:38 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:49:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046a7c5467ee32bdf5b0752999357b4dc00a2b0f46d0391fe0684d3362e6a2e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046a7c5467ee32bdf5b0752999357b4dc00a2b0f46d0391fe0684d3362e6a2e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046a7c5467ee32bdf5b0752999357b4dc00a2b0f46d0391fe0684d3362e6a2e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/046a7c5467ee32bdf5b0752999357b4dc00a2b0f46d0391fe0684d3362e6a2e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:38 np0005596060 podman[129804]: 2026-01-26 17:49:38.241729678 +0000 UTC m=+0.194975441 container init 2cf370e9e001bcd94dc9407516646ecd82cd35027e08a1b4c2aebce5321b884b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jang, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 12:49:38 np0005596060 podman[129804]: 2026-01-26 17:49:38.249724901 +0000 UTC m=+0.202970664 container start 2cf370e9e001bcd94dc9407516646ecd82cd35027e08a1b4c2aebce5321b884b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Jan 26 12:49:38 np0005596060 podman[129804]: 2026-01-26 17:49:38.255798366 +0000 UTC m=+0.209044159 container attach 2cf370e9e001bcd94dc9407516646ecd82cd35027e08a1b4c2aebce5321b884b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 12:49:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:49:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:38.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:49:38 np0005596060 python3.9[129923]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:49:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:38.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:39 np0005596060 cranky_jang[129821]: {
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:    "1": [
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:        {
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            "devices": [
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "/dev/loop3"
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            ],
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            "lv_name": "ceph_lv0",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            "lv_size": "7511998464",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            "name": "ceph_lv0",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            "tags": {
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "ceph.cluster_name": "ceph",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "ceph.crush_device_class": "",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "ceph.encrypted": "0",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "ceph.osd_id": "1",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "ceph.type": "block",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:                "ceph.vdo": "0"
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            },
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            "type": "block",
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:            "vg_name": "ceph_vg0"
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:        }
Jan 26 12:49:39 np0005596060 cranky_jang[129821]:    ]
Jan 26 12:49:39 np0005596060 cranky_jang[129821]: }
Jan 26 12:49:39 np0005596060 systemd[1]: libpod-2cf370e9e001bcd94dc9407516646ecd82cd35027e08a1b4c2aebce5321b884b.scope: Deactivated successfully.
Jan 26 12:49:39 np0005596060 conmon[129821]: conmon 2cf370e9e001bcd94dc9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2cf370e9e001bcd94dc9407516646ecd82cd35027e08a1b4c2aebce5321b884b.scope/container/memory.events
Jan 26 12:49:39 np0005596060 podman[129804]: 2026-01-26 17:49:39.144851481 +0000 UTC m=+1.098097254 container died 2cf370e9e001bcd94dc9407516646ecd82cd35027e08a1b4c2aebce5321b884b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jang, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:49:39 np0005596060 systemd[1]: var-lib-containers-storage-overlay-046a7c5467ee32bdf5b0752999357b4dc00a2b0f46d0391fe0684d3362e6a2e3-merged.mount: Deactivated successfully.
Jan 26 12:49:39 np0005596060 podman[129804]: 2026-01-26 17:49:39.245674365 +0000 UTC m=+1.198920128 container remove 2cf370e9e001bcd94dc9407516646ecd82cd35027e08a1b4c2aebce5321b884b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:49:39 np0005596060 systemd[1]: libpod-conmon-2cf370e9e001bcd94dc9407516646ecd82cd35027e08a1b4c2aebce5321b884b.scope: Deactivated successfully.
Jan 26 12:49:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:39 np0005596060 podman[130123]: 2026-01-26 17:49:39.968889691 +0000 UTC m=+0.060484110 container create 1e8476aea2211dd0af8a26490262230896cd895c3ac0f1841ea468029fab2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_blackwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:49:40 np0005596060 systemd[1]: Started libpod-conmon-1e8476aea2211dd0af8a26490262230896cd895c3ac0f1841ea468029fab2a66.scope.
Jan 26 12:49:40 np0005596060 podman[130123]: 2026-01-26 17:49:39.93702195 +0000 UTC m=+0.028616449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:49:40 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:49:40 np0005596060 podman[130123]: 2026-01-26 17:49:40.067889459 +0000 UTC m=+0.159483868 container init 1e8476aea2211dd0af8a26490262230896cd895c3ac0f1841ea468029fab2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_blackwell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 26 12:49:40 np0005596060 podman[130123]: 2026-01-26 17:49:40.076849917 +0000 UTC m=+0.168444326 container start 1e8476aea2211dd0af8a26490262230896cd895c3ac0f1841ea468029fab2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:49:40 np0005596060 podman[130123]: 2026-01-26 17:49:40.079858153 +0000 UTC m=+0.171452552 container attach 1e8476aea2211dd0af8a26490262230896cd895c3ac0f1841ea468029fab2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:49:40 np0005596060 jolly_blackwell[130177]: 167 167
Jan 26 12:49:40 np0005596060 systemd[1]: libpod-1e8476aea2211dd0af8a26490262230896cd895c3ac0f1841ea468029fab2a66.scope: Deactivated successfully.
Jan 26 12:49:40 np0005596060 podman[130123]: 2026-01-26 17:49:40.084775599 +0000 UTC m=+0.176370008 container died 1e8476aea2211dd0af8a26490262230896cd895c3ac0f1841ea468029fab2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 12:49:40 np0005596060 systemd[1]: var-lib-containers-storage-overlay-907f1b3b4be2c482666e51be0b1c2be4cdaa16dde347531575949d52b21aad80-merged.mount: Deactivated successfully.
Jan 26 12:49:40 np0005596060 podman[130123]: 2026-01-26 17:49:40.128499931 +0000 UTC m=+0.220094340 container remove 1e8476aea2211dd0af8a26490262230896cd895c3ac0f1841ea468029fab2a66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:49:40 np0005596060 systemd[1]: libpod-conmon-1e8476aea2211dd0af8a26490262230896cd895c3ac0f1841ea468029fab2a66.scope: Deactivated successfully.
Jan 26 12:49:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:40.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:40 np0005596060 podman[130247]: 2026-01-26 17:49:40.326020025 +0000 UTC m=+0.061297960 container create 3a8d7cbecd4b91a58b973cc80841d0a9beb29be332ab32d3ef27eb03d47d5c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 12:49:40 np0005596060 systemd[1]: Started libpod-conmon-3a8d7cbecd4b91a58b973cc80841d0a9beb29be332ab32d3ef27eb03d47d5c34.scope.
Jan 26 12:49:40 np0005596060 podman[130247]: 2026-01-26 17:49:40.305747909 +0000 UTC m=+0.041025874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:49:40 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:49:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99a9046ef572b8431f5e7cfa30ae9ead6c6c0e7dad3b9691f3fbf0bfc69488e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99a9046ef572b8431f5e7cfa30ae9ead6c6c0e7dad3b9691f3fbf0bfc69488e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99a9046ef572b8431f5e7cfa30ae9ead6c6c0e7dad3b9691f3fbf0bfc69488e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99a9046ef572b8431f5e7cfa30ae9ead6c6c0e7dad3b9691f3fbf0bfc69488e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:49:40 np0005596060 podman[130247]: 2026-01-26 17:49:40.423641398 +0000 UTC m=+0.158919353 container init 3a8d7cbecd4b91a58b973cc80841d0a9beb29be332ab32d3ef27eb03d47d5c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:49:40 np0005596060 podman[130247]: 2026-01-26 17:49:40.431741954 +0000 UTC m=+0.167019889 container start 3a8d7cbecd4b91a58b973cc80841d0a9beb29be332ab32d3ef27eb03d47d5c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 12:49:40 np0005596060 podman[130247]: 2026-01-26 17:49:40.435706925 +0000 UTC m=+0.170984890 container attach 3a8d7cbecd4b91a58b973cc80841d0a9beb29be332ab32d3ef27eb03d47d5c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 12:49:40 np0005596060 python3.9[130291]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:49:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:40.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:41 np0005596060 python3.9[130450]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:49:41 np0005596060 charming_cohen[130294]: {
Jan 26 12:49:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:41 np0005596060 charming_cohen[130294]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:49:41 np0005596060 charming_cohen[130294]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:49:41 np0005596060 charming_cohen[130294]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:49:41 np0005596060 charming_cohen[130294]:        "osd_id": 1,
Jan 26 12:49:41 np0005596060 charming_cohen[130294]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:49:41 np0005596060 charming_cohen[130294]:        "type": "bluestore"
Jan 26 12:49:41 np0005596060 charming_cohen[130294]:    }
Jan 26 12:49:41 np0005596060 charming_cohen[130294]: }
Jan 26 12:49:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:49:41 np0005596060 systemd[1]: libpod-3a8d7cbecd4b91a58b973cc80841d0a9beb29be332ab32d3ef27eb03d47d5c34.scope: Deactivated successfully.
Jan 26 12:49:41 np0005596060 podman[130247]: 2026-01-26 17:49:41.404809586 +0000 UTC m=+1.140087521 container died 3a8d7cbecd4b91a58b973cc80841d0a9beb29be332ab32d3ef27eb03d47d5c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 12:49:41 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c99a9046ef572b8431f5e7cfa30ae9ead6c6c0e7dad3b9691f3fbf0bfc69488e-merged.mount: Deactivated successfully.
Jan 26 12:49:41 np0005596060 podman[130247]: 2026-01-26 17:49:41.463356886 +0000 UTC m=+1.198634821 container remove 3a8d7cbecd4b91a58b973cc80841d0a9beb29be332ab32d3ef27eb03d47d5c34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cohen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 12:49:41 np0005596060 systemd[1]: libpod-conmon-3a8d7cbecd4b91a58b973cc80841d0a9beb29be332ab32d3ef27eb03d47d5c34.scope: Deactivated successfully.
Jan 26 12:49:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:49:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:49:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:49:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:49:41 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7664a374-d789-4307-bc36-fc165e1b34a9 does not exist
Jan 26 12:49:41 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev bd601dec-7b00-4731-96b0-bc1d15eecdc9 does not exist
Jan 26 12:49:41 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev bebff1a3-c644-4e10-8502-64e1e98d0116 does not exist
Jan 26 12:49:42 np0005596060 python3.9[130682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:49:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:42.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:42 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:49:42 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:49:42 np0005596060 python3.9[130805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449781.4938977-158-11615001203591/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d733b2998c3171ac062968a2d72fad6b6c622348 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:42.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:43 np0005596060 python3.9[130957]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:49:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:43 np0005596060 python3.9[131081]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449782.8337288-158-129865346010031/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7f2574ee4e0949b6273e9da9f87244a58b33ceaa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:49:43
Jan 26 12:49:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:49:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:49:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'volumes', '.mgr', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log']
Jan 26 12:49:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:49:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:49:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:49:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:44.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:49:44 np0005596060 python3.9[131233]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:49:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:44.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:45 np0005596060 python3.9[131356]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449783.9883156-158-156565907839207/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f12a6a98f0b6dd5b2cac0c6d592b0c4b7b020719 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:45 np0005596060 python3.9[131509]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:49:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:46.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:49:46 np0005596060 python3.9[131661]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:49:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:46.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:47 np0005596060 python3.9[131813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:49:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:47 np0005596060 python3.9[131937]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449786.6198123-333-204048097705602/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=42089a5edb5e8f61d1df01dcf5964b663dc62363 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:48.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:48 np0005596060 python3.9[132089]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:49:48 np0005596060 python3.9[132212]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449787.8095868-333-136529561994305/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b45673754a8986e9f47c98c3e35456cb9dfc3d3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:48.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:49 np0005596060 python3.9[132365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:49:50 np0005596060 python3.9[132488]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449789.045376-333-202070386081768/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=34c22e0d93a50b6c12fb2029ddc097faaa9c6a74 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:49:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:50.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:49:50 np0005596060 python3.9[132640]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:49:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:49:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:50.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:49:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:49:51 np0005596060 python3.9[132792]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:49:52 np0005596060 python3.9[132945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:49:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:52.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:52 np0005596060 python3.9[133068]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449791.610104-511-246507672348743/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=865deb854917f0111fe686460db9afd1499d36be backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:52.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:53 np0005596060 python3.9[133220]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:49:53 np0005596060 python3.9[133344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449792.9065175-511-260974855936470/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b45673754a8986e9f47c98c3e35456cb9dfc3d3e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:49:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:54.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:49:54 np0005596060 python3.9[133496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:49:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:49:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:54.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:49:55 np0005596060 python3.9[133619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449794.0906613-511-252617987671178/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=547b51ad5e87932aaa9fd9f773b3022c873e1762 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:56 np0005596060 python3.9[133772]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:49:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:56.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:49:56 np0005596060 python3.9[133972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:49:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:56.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:57 np0005596060 python3.9[134097]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449796.3979075-711-103359971487897/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9c020ad993969d6201452a9427187b11fbbe4910 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:58 np0005596060 python3.9[134250]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:49:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:49:58.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:58 np0005596060 python3.9[134402]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:49:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:49:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:49:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:49:58.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:49:59 np0005596060 python3.9[134525]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449798.251021-778-75081778697475/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9c020ad993969d6201452a9427187b11fbbe4910 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:49:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:49:59 np0005596060 python3.9[134678]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:50:00 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 12:50:00 np0005596060 ceph-mon[74267]: overall HEALTH_OK
Jan 26 12:50:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:50:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:00.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:50:00 np0005596060 python3.9[134830]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:00.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:01 np0005596060 python3.9[134953]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449800.1687899-846-175828272135708/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9c020ad993969d6201452a9427187b11fbbe4910 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:50:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:02 np0005596060 python3.9[135106]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:50:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:02.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:02 np0005596060 python3.9[135258]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:02.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:03 np0005596060 python3.9[135381]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449802.211225-922-61605233455939/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9c020ad993969d6201452a9427187b11fbbe4910 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:50:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:04 np0005596060 python3.9[135534]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:50:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:04.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:04 np0005596060 python3.9[135686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:04.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:05 np0005596060 python3.9[135809]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449804.2252946-996-141187587895617/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9c020ad993969d6201452a9427187b11fbbe4910 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:06 np0005596060 python3.9[135962]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:50:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:06.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:50:06 np0005596060 python3.9[136114]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:06.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:07 np0005596060 python3.9[136237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449806.191282-1066-179207190610031/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=9c020ad993969d6201452a9427187b11fbbe4910 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:08.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:08 np0005596060 systemd[1]: session-44.scope: Deactivated successfully.
Jan 26 12:50:08 np0005596060 systemd[1]: session-44.scope: Consumed 24.259s CPU time.
Jan 26 12:50:08 np0005596060 systemd-logind[786]: Session 44 logged out. Waiting for processes to exit.
Jan 26 12:50:08 np0005596060 systemd-logind[786]: Removed session 44.
Jan 26 12:50:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:08.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:10.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:50:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:10.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:50:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:50:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:50:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:12.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:50:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:13.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:50:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:50:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:50:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:50:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:50:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:50:14 np0005596060 systemd-logind[786]: New session 45 of user zuul.
Jan 26 12:50:14 np0005596060 systemd[1]: Started Session 45 of User zuul.
Jan 26 12:50:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:14.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 12:50:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:15.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 12:50:15 np0005596060 python3.9[136421]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:15 np0005596060 python3.9[136574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:16.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:50:16 np0005596060 python3.9[136697]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769449815.241886-62-121822409543020/.source.conf _original_basename=ceph.conf follow=False checksum=1cb6012f361f0c9e471f352b73a07eaa73c38d31 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 12:50:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:17.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 12:50:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:17 np0005596060 python3.9[136900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:18.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:18 np0005596060 python3.9[137023]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769449817.2980568-62-125649681853063/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=395d1c083c7c30077cae22673689037cb8c534c6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:19.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:19 np0005596060 systemd[1]: session-45.scope: Deactivated successfully.
Jan 26 12:50:19 np0005596060 systemd[1]: session-45.scope: Consumed 2.899s CPU time.
Jan 26 12:50:19 np0005596060 systemd-logind[786]: Session 45 logged out. Waiting for processes to exit.
Jan 26 12:50:19 np0005596060 systemd-logind[786]: Removed session 45.
Jan 26 12:50:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:50:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:20.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:50:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:21.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:50:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:22.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:23.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:24.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:25.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:25 np0005596060 systemd-logind[786]: New session 46 of user zuul.
Jan 26 12:50:25 np0005596060 systemd[1]: Started Session 46 of User zuul.
Jan 26 12:50:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:26.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:50:26 np0005596060 python3.9[137205]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:50:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:27.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:28 np0005596060 python3.9[137362]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:50:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:50:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:28.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:50:28 np0005596060 python3.9[137514]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:50:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 12:50:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:29.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 12:50:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:29 np0005596060 python3.9[137664]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:50:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:30.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:30 np0005596060 python3.9[137817]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 26 12:50:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:31.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:50:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:32.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 12:50:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:33.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 12:50:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:34.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 12:50:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:35.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 12:50:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:36 np0005596060 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 26 12:50:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:36.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:50:36 np0005596060 python3.9[137977]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:50:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:37 np0005596060 python3.9[138111]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:50:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:38.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:39.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:39 np0005596060 python3.9[138266]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 12:50:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:40.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:41 np0005596060 python3[138421]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 26 12:50:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:41.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:50:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:41 np0005596060 python3.9[138574]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:50:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:42.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:50:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:50:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:50:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:50:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:50:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:42 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:42 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:42 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:42 np0005596060 python3.9[138851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:43.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:43 np0005596060 python3.9[139039]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:43 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 3138d967-c436-4431-88ef-24f0fd57d66f does not exist
Jan 26 12:50:43 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 786df1cc-9db3-4920-8eea-46af66d6becf does not exist
Jan 26 12:50:43 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev fec5061e-21e3-489e-b3f8-bc82c2f33b78 does not exist
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:43 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:50:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:50:43
Jan 26 12:50:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:50:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:50:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'vms', '.mgr', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.control']
Jan 26 12:50:43 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:50:44 np0005596060 podman[139349]: 2026-01-26 17:50:44.058840927 +0000 UTC m=+0.056295690 container create d374311d82a2f09c597a4bcccabbf80cc1e3719d5cebdf2db0978d5184bc8474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:50:44 np0005596060 python3.9[139310]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:44 np0005596060 systemd[1]: Started libpod-conmon-d374311d82a2f09c597a4bcccabbf80cc1e3719d5cebdf2db0978d5184bc8474.scope.
Jan 26 12:50:44 np0005596060 podman[139349]: 2026-01-26 17:50:44.031413662 +0000 UTC m=+0.028868455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:50:44 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:50:44 np0005596060 podman[139349]: 2026-01-26 17:50:44.178465059 +0000 UTC m=+0.175919852 container init d374311d82a2f09c597a4bcccabbf80cc1e3719d5cebdf2db0978d5184bc8474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 12:50:44 np0005596060 podman[139349]: 2026-01-26 17:50:44.187837827 +0000 UTC m=+0.185292580 container start d374311d82a2f09c597a4bcccabbf80cc1e3719d5cebdf2db0978d5184bc8474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_margulis, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:50:44 np0005596060 podman[139349]: 2026-01-26 17:50:44.191897044 +0000 UTC m=+0.189351827 container attach d374311d82a2f09c597a4bcccabbf80cc1e3719d5cebdf2db0978d5184bc8474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_margulis, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 12:50:44 np0005596060 systemd[1]: libpod-d374311d82a2f09c597a4bcccabbf80cc1e3719d5cebdf2db0978d5184bc8474.scope: Deactivated successfully.
Jan 26 12:50:44 np0005596060 frosty_margulis[139367]: 167 167
Jan 26 12:50:44 np0005596060 conmon[139367]: conmon d374311d82a2f09c597a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d374311d82a2f09c597a4bcccabbf80cc1e3719d5cebdf2db0978d5184bc8474.scope/container/memory.events
Jan 26 12:50:44 np0005596060 podman[139349]: 2026-01-26 17:50:44.19930148 +0000 UTC m=+0.196756253 container died d374311d82a2f09c597a4bcccabbf80cc1e3719d5cebdf2db0978d5184bc8474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 12:50:44 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f4b8f3af4799bbbc7acbecd5da6a10763f1b20f54db3fab395d6bf9c35cb5804-merged.mount: Deactivated successfully.
Jan 26 12:50:44 np0005596060 podman[139349]: 2026-01-26 17:50:44.24621255 +0000 UTC m=+0.243667313 container remove d374311d82a2f09c597a4bcccabbf80cc1e3719d5cebdf2db0978d5184bc8474 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 12:50:44 np0005596060 systemd[1]: libpod-conmon-d374311d82a2f09c597a4bcccabbf80cc1e3719d5cebdf2db0978d5184bc8474.scope: Deactivated successfully.
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:50:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:50:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:44.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:44 np0005596060 podman[139466]: 2026-01-26 17:50:44.420927579 +0000 UTC m=+0.049134820 container create 0757e487acd36fc256d7c9ec76cb3409557375008fe8b14595aeb3d20932d632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:50:44 np0005596060 systemd[1]: Started libpod-conmon-0757e487acd36fc256d7c9ec76cb3409557375008fe8b14595aeb3d20932d632.scope.
Jan 26 12:50:44 np0005596060 podman[139466]: 2026-01-26 17:50:44.399981185 +0000 UTC m=+0.028188466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:50:44 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:50:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d4ce11b27e32afa2650cc3c265a7368c1699bebb9606ac5b9dc50bf97f0606/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d4ce11b27e32afa2650cc3c265a7368c1699bebb9606ac5b9dc50bf97f0606/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d4ce11b27e32afa2650cc3c265a7368c1699bebb9606ac5b9dc50bf97f0606/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d4ce11b27e32afa2650cc3c265a7368c1699bebb9606ac5b9dc50bf97f0606/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70d4ce11b27e32afa2650cc3c265a7368c1699bebb9606ac5b9dc50bf97f0606/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:44 np0005596060 podman[139466]: 2026-01-26 17:50:44.535649272 +0000 UTC m=+0.163856533 container init 0757e487acd36fc256d7c9ec76cb3409557375008fe8b14595aeb3d20932d632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 12:50:44 np0005596060 podman[139466]: 2026-01-26 17:50:44.547772872 +0000 UTC m=+0.175980113 container start 0757e487acd36fc256d7c9ec76cb3409557375008fe8b14595aeb3d20932d632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 12:50:44 np0005596060 podman[139466]: 2026-01-26 17:50:44.551838659 +0000 UTC m=+0.180045900 container attach 0757e487acd36fc256d7c9ec76cb3409557375008fe8b14595aeb3d20932d632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 12:50:44 np0005596060 python3.9[139473]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2re3szpp recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:50:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:45.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:50:45 np0005596060 musing_jones[139484]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:50:45 np0005596060 musing_jones[139484]: --> relative data size: 1.0
Jan 26 12:50:45 np0005596060 musing_jones[139484]: --> All data devices are unavailable
Jan 26 12:50:45 np0005596060 systemd[1]: libpod-0757e487acd36fc256d7c9ec76cb3409557375008fe8b14595aeb3d20932d632.scope: Deactivated successfully.
Jan 26 12:50:45 np0005596060 podman[139466]: 2026-01-26 17:50:45.368941001 +0000 UTC m=+0.997148252 container died 0757e487acd36fc256d7c9ec76cb3409557375008fe8b14595aeb3d20932d632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 12:50:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:45 np0005596060 systemd[1]: var-lib-containers-storage-overlay-70d4ce11b27e32afa2650cc3c265a7368c1699bebb9606ac5b9dc50bf97f0606-merged.mount: Deactivated successfully.
Jan 26 12:50:45 np0005596060 podman[139466]: 2026-01-26 17:50:45.435322226 +0000 UTC m=+1.063529467 container remove 0757e487acd36fc256d7c9ec76cb3409557375008fe8b14595aeb3d20932d632 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:50:45 np0005596060 systemd[1]: libpod-conmon-0757e487acd36fc256d7c9ec76cb3409557375008fe8b14595aeb3d20932d632.scope: Deactivated successfully.
Jan 26 12:50:45 np0005596060 python3.9[139643]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:46 np0005596060 python3.9[139840]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:46 np0005596060 podman[139881]: 2026-01-26 17:50:46.080229985 +0000 UTC m=+0.039736832 container create e5e45f3c0c819360d5fa6db348058348e9a720510eac0262f8aa5ae8a755d03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lumiere, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:50:46 np0005596060 systemd[1]: Started libpod-conmon-e5e45f3c0c819360d5fa6db348058348e9a720510eac0262f8aa5ae8a755d03e.scope.
Jan 26 12:50:46 np0005596060 podman[139881]: 2026-01-26 17:50:46.063825591 +0000 UTC m=+0.023332458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:50:46 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:50:46 np0005596060 podman[139881]: 2026-01-26 17:50:46.181397819 +0000 UTC m=+0.140904696 container init e5e45f3c0c819360d5fa6db348058348e9a720510eac0262f8aa5ae8a755d03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:50:46 np0005596060 podman[139881]: 2026-01-26 17:50:46.188098636 +0000 UTC m=+0.147605483 container start e5e45f3c0c819360d5fa6db348058348e9a720510eac0262f8aa5ae8a755d03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 12:50:46 np0005596060 podman[139881]: 2026-01-26 17:50:46.192112782 +0000 UTC m=+0.151619659 container attach e5e45f3c0c819360d5fa6db348058348e9a720510eac0262f8aa5ae8a755d03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lumiere, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 12:50:46 np0005596060 hungry_lumiere[139922]: 167 167
Jan 26 12:50:46 np0005596060 podman[139881]: 2026-01-26 17:50:46.194464215 +0000 UTC m=+0.153971062 container died e5e45f3c0c819360d5fa6db348058348e9a720510eac0262f8aa5ae8a755d03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lumiere, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:50:46 np0005596060 systemd[1]: libpod-e5e45f3c0c819360d5fa6db348058348e9a720510eac0262f8aa5ae8a755d03e.scope: Deactivated successfully.
Jan 26 12:50:46 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2be56f9437f8337057c336996f2ff17153aa75c9efc306191c61187de7a05ff2-merged.mount: Deactivated successfully.
Jan 26 12:50:46 np0005596060 podman[139881]: 2026-01-26 17:50:46.23286708 +0000 UTC m=+0.192373937 container remove e5e45f3c0c819360d5fa6db348058348e9a720510eac0262f8aa5ae8a755d03e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lumiere, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:50:46 np0005596060 systemd[1]: libpod-conmon-e5e45f3c0c819360d5fa6db348058348e9a720510eac0262f8aa5ae8a755d03e.scope: Deactivated successfully.
Jan 26 12:50:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:46.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:50:46 np0005596060 podman[139992]: 2026-01-26 17:50:46.409380646 +0000 UTC m=+0.048701538 container create 6cc8b988f2881c12a614d437275b3bdf42179b6ed6e7ab59e910f650254125a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:50:46 np0005596060 systemd[1]: Started libpod-conmon-6cc8b988f2881c12a614d437275b3bdf42179b6ed6e7ab59e910f650254125a0.scope.
Jan 26 12:50:46 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:50:46 np0005596060 podman[139992]: 2026-01-26 17:50:46.388452183 +0000 UTC m=+0.027773125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:50:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a284504862a5cb0db4a0eb3dfc908f26fcb17a15a1d4c658adac188a29d0ea22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a284504862a5cb0db4a0eb3dfc908f26fcb17a15a1d4c658adac188a29d0ea22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a284504862a5cb0db4a0eb3dfc908f26fcb17a15a1d4c658adac188a29d0ea22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a284504862a5cb0db4a0eb3dfc908f26fcb17a15a1d4c658adac188a29d0ea22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:46 np0005596060 podman[139992]: 2026-01-26 17:50:46.500247978 +0000 UTC m=+0.139568900 container init 6cc8b988f2881c12a614d437275b3bdf42179b6ed6e7ab59e910f650254125a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Jan 26 12:50:46 np0005596060 podman[139992]: 2026-01-26 17:50:46.509884383 +0000 UTC m=+0.149205275 container start 6cc8b988f2881c12a614d437275b3bdf42179b6ed6e7ab59e910f650254125a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 26 12:50:46 np0005596060 podman[139992]: 2026-01-26 17:50:46.513416416 +0000 UTC m=+0.152737308 container attach 6cc8b988f2881c12a614d437275b3bdf42179b6ed6e7ab59e910f650254125a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:50:46 np0005596060 python3.9[140097]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:50:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:47.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]: {
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:    "1": [
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:        {
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            "devices": [
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "/dev/loop3"
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            ],
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            "lv_name": "ceph_lv0",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            "lv_size": "7511998464",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            "name": "ceph_lv0",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            "tags": {
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "ceph.cluster_name": "ceph",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "ceph.crush_device_class": "",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "ceph.encrypted": "0",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "ceph.osd_id": "1",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "ceph.type": "block",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:                "ceph.vdo": "0"
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            },
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            "type": "block",
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:            "vg_name": "ceph_vg0"
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:        }
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]:    ]
Jan 26 12:50:47 np0005596060 stupefied_elion[140017]: }
Jan 26 12:50:47 np0005596060 systemd[1]: libpod-6cc8b988f2881c12a614d437275b3bdf42179b6ed6e7ab59e910f650254125a0.scope: Deactivated successfully.
Jan 26 12:50:47 np0005596060 podman[139992]: 2026-01-26 17:50:47.300436242 +0000 UTC m=+0.939757224 container died 6cc8b988f2881c12a614d437275b3bdf42179b6ed6e7ab59e910f650254125a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 12:50:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:47 np0005596060 python3[140266]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 12:50:48 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a284504862a5cb0db4a0eb3dfc908f26fcb17a15a1d4c658adac188a29d0ea22-merged.mount: Deactivated successfully.
Jan 26 12:50:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:48.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:48 np0005596060 podman[139992]: 2026-01-26 17:50:48.662510149 +0000 UTC m=+2.301831031 container remove 6cc8b988f2881c12a614d437275b3bdf42179b6ed6e7ab59e910f650254125a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elion, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 26 12:50:48 np0005596060 python3.9[140419]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:48 np0005596060 systemd[1]: libpod-conmon-6cc8b988f2881c12a614d437275b3bdf42179b6ed6e7ab59e910f650254125a0.scope: Deactivated successfully.
Jan 26 12:50:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:49.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:49 np0005596060 python3.9[140669]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449848.1072638-432-138732160297532/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:49 np0005596060 podman[140684]: 2026-01-26 17:50:49.234588112 +0000 UTC m=+0.025456944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:50:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:49 np0005596060 podman[140684]: 2026-01-26 17:50:49.557765816 +0000 UTC m=+0.348634618 container create cb303c6fb168b539b6a44f5f7c407290e373ed359403934c5ac37d151413906e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 12:50:50 np0005596060 systemd[1]: Started libpod-conmon-cb303c6fb168b539b6a44f5f7c407290e373ed359403934c5ac37d151413906e.scope.
Jan 26 12:50:50 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:50:50 np0005596060 podman[140684]: 2026-01-26 17:50:50.14309817 +0000 UTC m=+0.933966992 container init cb303c6fb168b539b6a44f5f7c407290e373ed359403934c5ac37d151413906e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 12:50:50 np0005596060 podman[140684]: 2026-01-26 17:50:50.151157543 +0000 UTC m=+0.942026345 container start cb303c6fb168b539b6a44f5f7c407290e373ed359403934c5ac37d151413906e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 12:50:50 np0005596060 python3.9[140850]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:50 np0005596060 jovial_sanderson[140853]: 167 167
Jan 26 12:50:50 np0005596060 systemd[1]: libpod-cb303c6fb168b539b6a44f5f7c407290e373ed359403934c5ac37d151413906e.scope: Deactivated successfully.
Jan 26 12:50:50 np0005596060 podman[140684]: 2026-01-26 17:50:50.303664904 +0000 UTC m=+1.094533726 container attach cb303c6fb168b539b6a44f5f7c407290e373ed359403934c5ac37d151413906e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:50:50 np0005596060 podman[140684]: 2026-01-26 17:50:50.304086645 +0000 UTC m=+1.094955457 container died cb303c6fb168b539b6a44f5f7c407290e373ed359403934c5ac37d151413906e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 12:50:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:50:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:50.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:50:50 np0005596060 python3.9[140992]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449849.619747-477-87859129754373/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:51.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:50:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:51 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d3f9017f61f94221b5c7eb15be109567a48a2190348ee102bf1770c6e90db2f4-merged.mount: Deactivated successfully.
Jan 26 12:50:51 np0005596060 python3.9[141146]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:52 np0005596060 python3.9[141271]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449851.1493363-522-91876683966954/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:50:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:52.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:50:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:53.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:53 np0005596060 python3.9[141423]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:53 np0005596060 python3.9[141549]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449852.6103625-567-3820642459230/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:54.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:54 np0005596060 python3.9[141701]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:50:55 np0005596060 podman[140684]: 2026-01-26 17:50:55.046038393 +0000 UTC m=+5.836907225 container remove cb303c6fb168b539b6a44f5f7c407290e373ed359403934c5ac37d151413906e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:50:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:55.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:55 np0005596060 systemd[1]: libpod-conmon-cb303c6fb168b539b6a44f5f7c407290e373ed359403934c5ac37d151413906e.scope: Deactivated successfully.
Jan 26 12:50:55 np0005596060 podman[141802]: 2026-01-26 17:50:55.245725212 +0000 UTC m=+0.030571599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:50:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:55 np0005596060 python3.9[141848]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769449854.0150936-612-99008832047821/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:55 np0005596060 podman[141802]: 2026-01-26 17:50:55.865871556 +0000 UTC m=+0.650717903 container create 37b6f9a0d0b57e63935ff55dcbf63374f0e27793f2d87aac1c6b659fae5bcf18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcnulty, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:50:55 np0005596060 systemd[1]: Started libpod-conmon-37b6f9a0d0b57e63935ff55dcbf63374f0e27793f2d87aac1c6b659fae5bcf18.scope.
Jan 26 12:50:55 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:50:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746093cb566e8a780fb43d72f8d4e33a0511c98540eb4a71abb1ac1ae56768b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746093cb566e8a780fb43d72f8d4e33a0511c98540eb4a71abb1ac1ae56768b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746093cb566e8a780fb43d72f8d4e33a0511c98540eb4a71abb1ac1ae56768b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746093cb566e8a780fb43d72f8d4e33a0511c98540eb4a71abb1ac1ae56768b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:50:55 np0005596060 podman[141802]: 2026-01-26 17:50:55.985351775 +0000 UTC m=+0.770198152 container init 37b6f9a0d0b57e63935ff55dcbf63374f0e27793f2d87aac1c6b659fae5bcf18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:50:55 np0005596060 podman[141802]: 2026-01-26 17:50:55.997987449 +0000 UTC m=+0.782833796 container start 37b6f9a0d0b57e63935ff55dcbf63374f0e27793f2d87aac1c6b659fae5bcf18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcnulty, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 12:50:56 np0005596060 podman[141802]: 2026-01-26 17:50:56.035820279 +0000 UTC m=+0.820666626 container attach 37b6f9a0d0b57e63935ff55dcbf63374f0e27793f2d87aac1c6b659fae5bcf18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcnulty, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 12:50:56 np0005596060 python3.9[142008]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:56.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:50:56 np0005596060 condescending_mcnulty[141957]: {
Jan 26 12:50:56 np0005596060 condescending_mcnulty[141957]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:50:56 np0005596060 condescending_mcnulty[141957]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:50:56 np0005596060 condescending_mcnulty[141957]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:50:56 np0005596060 condescending_mcnulty[141957]:        "osd_id": 1,
Jan 26 12:50:56 np0005596060 condescending_mcnulty[141957]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:50:56 np0005596060 condescending_mcnulty[141957]:        "type": "bluestore"
Jan 26 12:50:56 np0005596060 condescending_mcnulty[141957]:    }
Jan 26 12:50:56 np0005596060 condescending_mcnulty[141957]: }
Jan 26 12:50:56 np0005596060 systemd[1]: libpod-37b6f9a0d0b57e63935ff55dcbf63374f0e27793f2d87aac1c6b659fae5bcf18.scope: Deactivated successfully.
Jan 26 12:50:56 np0005596060 podman[141802]: 2026-01-26 17:50:56.896853622 +0000 UTC m=+1.681699989 container died 37b6f9a0d0b57e63935ff55dcbf63374f0e27793f2d87aac1c6b659fae5bcf18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcnulty, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Jan 26 12:50:56 np0005596060 systemd[1]: var-lib-containers-storage-overlay-746093cb566e8a780fb43d72f8d4e33a0511c98540eb4a71abb1ac1ae56768b4-merged.mount: Deactivated successfully.
Jan 26 12:50:56 np0005596060 python3.9[142168]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:50:56 np0005596060 podman[141802]: 2026-01-26 17:50:56.975309036 +0000 UTC m=+1.760155383 container remove 37b6f9a0d0b57e63935ff55dcbf63374f0e27793f2d87aac1c6b659fae5bcf18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mcnulty, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 26 12:50:56 np0005596060 systemd[1]: libpod-conmon-37b6f9a0d0b57e63935ff55dcbf63374f0e27793f2d87aac1c6b659fae5bcf18.scope: Deactivated successfully.
Jan 26 12:50:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:50:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:50:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:57 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ea6688dc-a667-49ab-8de3-8c521da86050 does not exist
Jan 26 12:50:57 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 147c64c8-be19-4e72-b18d-1f82b13f9656 does not exist
Jan 26 12:50:57 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 01f78722-e64f-419e-a508-b5645307dc4e does not exist
Jan 26 12:50:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:57.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:57 np0005596060 python3.9[142447]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:50:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:50:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:50:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:50:58.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:50:58 np0005596060 python3.9[142599]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:50:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:50:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 26 12:50:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:50:59.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 26 12:50:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:50:59 np0005596060 python3.9[142752]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:51:00 np0005596060 python3.9[142907]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:51:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:00.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:00 np0005596060 python3.9[143062]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:51:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:01.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:51:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:02 np0005596060 python3.9[143213]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:51:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:02.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:03.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:51:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:03 np0005596060 python3.9[143367]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:51:03 np0005596060 ovs-vsctl[143368]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 26 12:51:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:04.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:04 np0005596060 python3.9[143520]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:51:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:51:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:05.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:51:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:05 np0005596060 python3.9[143676]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:51:05 np0005596060 ovs-vsctl[143677]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 26 12:51:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:06.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:51:06 np0005596060 python3.9[143827]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:51:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:51:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:07.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:51:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:08.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:08 np0005596060 python3.9[143982]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:51:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:09.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:09 np0005596060 python3.9[144134]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:51:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:09 np0005596060 python3.9[144213]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:51:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:51:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:10.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:51:10 np0005596060 python3.9[144365]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:51:11 np0005596060 python3.9[144443]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:51:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:11.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 12:51:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 7965 writes, 33K keys, 7965 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 7965 writes, 1561 syncs, 5.10 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7965 writes, 33K keys, 7965 commit groups, 1.0 writes per commit group, ingest: 20.99 MB, 0.03 MB/s#012Interval WAL: 7965 writes, 1561 syncs, 5.10 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556c7332c2d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556c7332c2d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 26 12:51:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:51:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:11 np0005596060 python3.9[144596]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:51:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:12.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:12 np0005596060 python3.9[144748]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:51:13 np0005596060 python3.9[144826]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:51:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:13.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:13 np0005596060 python3.9[144979]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:51:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:51:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:51:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:51:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:51:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:51:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:51:14 np0005596060 python3.9[145057]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:51:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:14.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:15.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:15 np0005596060 python3.9[145209]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:51:15 np0005596060 systemd[1]: Reloading.
Jan 26 12:51:15 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:51:15 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:51:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:51:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:16.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:17.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:17 np0005596060 python3.9[145398]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:51:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:17 np0005596060 python3.9[145527]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:51:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:51:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:18.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:51:18 np0005596060 python3.9[145679]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:51:18 np0005596060 python3.9[145757]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:51:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:19.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:20 np0005596060 python3.9[145910]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:51:20 np0005596060 systemd[1]: Reloading.
Jan 26 12:51:20 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:51:20 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:51:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:20.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:20 np0005596060 systemd[1]: Starting Create netns directory...
Jan 26 12:51:20 np0005596060 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 26 12:51:20 np0005596060 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 26 12:51:20 np0005596060 systemd[1]: Finished Create netns directory.
Jan 26 12:51:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:21.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:21 np0005596060 python3.9[146106]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:51:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:51:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:22 np0005596060 python3.9[146259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:51:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:22.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:22 np0005596060 python3.9[146382]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769449881.5926738-1365-8822574082374/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:51:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:51:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:23.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:51:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:23 np0005596060 ceph-mgr[74563]: [devicehealth INFO root] Check health
Jan 26 12:51:23 np0005596060 python3.9[146535]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:51:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:24.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:24 np0005596060 python3.9[146687]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:51:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:25.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:25 np0005596060 python3.9[146839]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:51:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:25 np0005596060 python3.9[146963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769449884.863763-1464-75532100356784/.source.json _original_basename=.e4020pgk follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:51:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:51:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:51:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:26.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:51:26 np0005596060 python3.9[147113]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:51:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:27.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:28.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:28 np0005596060 python3.9[147537]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 26 12:51:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:29.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:30 np0005596060 python3.9[147690]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 12:51:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:30.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:51:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:31.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:51:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:51:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:31 np0005596060 python3[147843]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 12:51:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:32.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:33.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:51:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:34.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:51:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:35.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:51:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:36.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:37.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:37 np0005596060 podman[147857]: 2026-01-26 17:51:37.584519913 +0000 UTC m=+5.522929235 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 26 12:51:37 np0005596060 podman[148025]: 2026-01-26 17:51:37.725553316 +0000 UTC m=+0.053649497 container create c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 26 12:51:37 np0005596060 podman[148025]: 2026-01-26 17:51:37.695894556 +0000 UTC m=+0.023990767 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 26 12:51:37 np0005596060 python3[147843]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 26 12:51:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:51:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:38.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:51:38 np0005596060 python3.9[148215]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:51:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:51:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:39.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:51:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:39 np0005596060 python3.9[148369]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:51:39 np0005596060 python3.9[148446]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:51:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:40.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:40 np0005596060 python3.9[148597]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769449900.0703547-1698-236828024792230/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:51:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:41.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:41 np0005596060 python3.9[148673]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 12:51:41 np0005596060 systemd[1]: Reloading.
Jan 26 12:51:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:51:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:41 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:51:41 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:51:42 np0005596060 python3.9[148785]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:51:42 np0005596060 systemd[1]: Reloading.
Jan 26 12:51:42 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:51:42 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:51:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:51:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:42.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:51:42 np0005596060 systemd[1]: Starting ovn_controller container...
Jan 26 12:51:42 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:51:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/050a903124a700a1c9526b70a13d96388d0b261d175f1d8127bf04dd8151feac/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 26 12:51:42 np0005596060 systemd[1]: Started /usr/bin/podman healthcheck run c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1.
Jan 26 12:51:42 np0005596060 podman[148827]: 2026-01-26 17:51:42.831392471 +0000 UTC m=+0.230479725 container init c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:51:42 np0005596060 ovn_controller[148842]: + sudo -E kolla_set_configs
Jan 26 12:51:42 np0005596060 podman[148827]: 2026-01-26 17:51:42.863786689 +0000 UTC m=+0.262873943 container start c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:51:42 np0005596060 edpm-start-podman-container[148827]: ovn_controller
Jan 26 12:51:42 np0005596060 systemd[1]: Created slice User Slice of UID 0.
Jan 26 12:51:42 np0005596060 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 26 12:51:42 np0005596060 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 26 12:51:42 np0005596060 systemd[1]: Starting User Manager for UID 0...
Jan 26 12:51:42 np0005596060 edpm-start-podman-container[148826]: Creating additional drop-in dependency for "ovn_controller" (c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1)
Jan 26 12:51:42 np0005596060 podman[148849]: 2026-01-26 17:51:42.949265689 +0000 UTC m=+0.074047992 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, container_name=ovn_controller)
Jan 26 12:51:42 np0005596060 systemd[1]: c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1-17fb5077d832b1be.service: Main process exited, code=exited, status=1/FAILURE
Jan 26 12:51:42 np0005596060 systemd[1]: c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1-17fb5077d832b1be.service: Failed with result 'exit-code'.
Jan 26 12:51:42 np0005596060 systemd[1]: Reloading.
Jan 26 12:51:43 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:51:43 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:51:43 np0005596060 systemd[148878]: Queued start job for default target Main User Target.
Jan 26 12:51:43 np0005596060 systemd[148878]: Created slice User Application Slice.
Jan 26 12:51:43 np0005596060 systemd[148878]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 26 12:51:43 np0005596060 systemd[148878]: Started Daily Cleanup of User's Temporary Directories.
Jan 26 12:51:43 np0005596060 systemd[148878]: Reached target Paths.
Jan 26 12:51:43 np0005596060 systemd[148878]: Reached target Timers.
Jan 26 12:51:43 np0005596060 systemd[148878]: Starting D-Bus User Message Bus Socket...
Jan 26 12:51:43 np0005596060 systemd[148878]: Starting Create User's Volatile Files and Directories...
Jan 26 12:51:43 np0005596060 systemd[148878]: Listening on D-Bus User Message Bus Socket.
Jan 26 12:51:43 np0005596060 systemd[148878]: Finished Create User's Volatile Files and Directories.
Jan 26 12:51:43 np0005596060 systemd[148878]: Reached target Sockets.
Jan 26 12:51:43 np0005596060 systemd[148878]: Reached target Basic System.
Jan 26 12:51:43 np0005596060 systemd[148878]: Reached target Main User Target.
Jan 26 12:51:43 np0005596060 systemd[148878]: Startup finished in 134ms.
Jan 26 12:51:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:51:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:43.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:51:43 np0005596060 systemd[1]: Started User Manager for UID 0.
Jan 26 12:51:43 np0005596060 systemd[1]: Started ovn_controller container.
Jan 26 12:51:43 np0005596060 systemd[1]: Started Session c1 of User root.
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: INFO:__main__:Validating config file
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: INFO:__main__:Writing out command to execute
Jan 26 12:51:43 np0005596060 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: ++ cat /run_command
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: + ARGS=
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: + sudo kolla_copy_cacerts
Jan 26 12:51:43 np0005596060 systemd[1]: Started Session c2 of User root.
Jan 26 12:51:43 np0005596060 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: + [[ ! -n '' ]]
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: + . kolla_extend_start
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: + umask 0022
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 26 12:51:43 np0005596060 NetworkManager[48900]: <info>  [1769449903.4066] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 26 12:51:43 np0005596060 NetworkManager[48900]: <info>  [1769449903.4082] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 12:51:43 np0005596060 NetworkManager[48900]: <warn>  [1769449903.4085] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 12:51:43 np0005596060 NetworkManager[48900]: <info>  [1769449903.4094] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 26 12:51:43 np0005596060 NetworkManager[48900]: <info>  [1769449903.4101] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 26 12:51:43 np0005596060 NetworkManager[48900]: <info>  [1769449903.4106] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 26 12:51:43 np0005596060 kernel: br-int: entered promiscuous mode
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 26 12:51:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:43 np0005596060 systemd-udevd[148975]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 26 12:51:43 np0005596060 ovn_controller[148842]: 2026-01-26T17:51:43Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 26 12:51:43 np0005596060 NetworkManager[48900]: <info>  [1769449903.6789] manager: (ovn-345392-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 26 12:51:43 np0005596060 kernel: genev_sys_6081: entered promiscuous mode
Jan 26 12:51:43 np0005596060 systemd-udevd[148977]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 12:51:43 np0005596060 NetworkManager[48900]: <info>  [1769449903.6941] device (genev_sys_6081): carrier: link connected
Jan 26 12:51:43 np0005596060 NetworkManager[48900]: <info>  [1769449903.6944] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 26 12:51:43 np0005596060 NetworkManager[48900]: <info>  [1769449903.9215] manager: (ovn-657115-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:51:43
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['backups', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', 'images', 'vms', 'default.rgw.meta']
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:51:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:51:44 np0005596060 NetworkManager[48900]: <info>  [1769449904.3283] manager: (ovn-9838f2-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 26 12:51:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:44.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:45.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:45 np0005596060 python3.9[149106]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 26 12:51:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:51:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:51:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:46.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:51:46 np0005596060 python3.9[149259]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:51:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:51:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:47.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:51:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:47 np0005596060 python3.9[149382]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769449905.9977648-1833-182290604687236/.source.yaml _original_basename=.ps3t2oyr follow=False checksum=d2889da2b79efa07e6f3a0ac50ac42a16f618171 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:51:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:48.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:49 np0005596060 python3.9[149535]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:51:49 np0005596060 ovs-vsctl[149536]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 26 12:51:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:49.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:50 np0005596060 python3.9[149689]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:51:50 np0005596060 ovs-vsctl[149691]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 26 12:51:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:50.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:51 np0005596060 python3.9[149844]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:51:51 np0005596060 ovs-vsctl[149845]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 26 12:51:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:51.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:51:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:52 np0005596060 systemd-logind[786]: Session 46 logged out. Waiting for processes to exit.
Jan 26 12:51:52 np0005596060 systemd[1]: session-46.scope: Deactivated successfully.
Jan 26 12:51:52 np0005596060 systemd[1]: session-46.scope: Consumed 1min 1.296s CPU time.
Jan 26 12:51:52 np0005596060 systemd-logind[786]: Removed session 46.
Jan 26 12:51:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:52.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:53.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:53 np0005596060 systemd[1]: Stopping User Manager for UID 0...
Jan 26 12:51:53 np0005596060 systemd[148878]: Activating special unit Exit the Session...
Jan 26 12:51:53 np0005596060 systemd[148878]: Stopped target Main User Target.
Jan 26 12:51:53 np0005596060 systemd[148878]: Stopped target Basic System.
Jan 26 12:51:53 np0005596060 systemd[148878]: Stopped target Paths.
Jan 26 12:51:53 np0005596060 systemd[148878]: Stopped target Sockets.
Jan 26 12:51:53 np0005596060 systemd[148878]: Stopped target Timers.
Jan 26 12:51:53 np0005596060 systemd[148878]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 26 12:51:53 np0005596060 systemd[148878]: Closed D-Bus User Message Bus Socket.
Jan 26 12:51:53 np0005596060 systemd[148878]: Stopped Create User's Volatile Files and Directories.
Jan 26 12:51:53 np0005596060 systemd[148878]: Removed slice User Application Slice.
Jan 26 12:51:53 np0005596060 systemd[148878]: Reached target Shutdown.
Jan 26 12:51:53 np0005596060 systemd[148878]: Finished Exit the Session.
Jan 26 12:51:53 np0005596060 systemd[148878]: Reached target Exit the Session.
Jan 26 12:51:53 np0005596060 systemd[1]: user@0.service: Deactivated successfully.
Jan 26 12:51:53 np0005596060 systemd[1]: Stopped User Manager for UID 0.
Jan 26 12:51:53 np0005596060 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 26 12:51:53 np0005596060 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 26 12:51:53 np0005596060 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 26 12:51:53 np0005596060 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 26 12:51:53 np0005596060 systemd[1]: Removed slice User Slice of UID 0.
Jan 26 12:51:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:54.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:51:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:55.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:51:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:51:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:56.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:57.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:51:57 np0005596060 systemd-logind[786]: New session 48 of user zuul.
Jan 26 12:51:57 np0005596060 systemd[1]: Started Session 48 of User zuul.
Jan 26 12:51:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 12:51:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 26 12:51:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 12:51:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 26 12:51:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 12:51:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:51:58.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:51:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:51:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:51:59.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:51:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:00 np0005596060 python3.9[150210]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:52:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:00.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:01.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:01 np0005596060 python3.9[150372]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:02 np0005596060 python3.9[150526]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.440070) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449922440316, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1740, "num_deletes": 251, "total_data_size": 3301915, "memory_usage": 3343184, "flush_reason": "Manual Compaction"}
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 26 12:52:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:02.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449922484058, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3243652, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10609, "largest_seqno": 12348, "table_properties": {"data_size": 3235591, "index_size": 4940, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15558, "raw_average_key_size": 19, "raw_value_size": 3219785, "raw_average_value_size": 4044, "num_data_blocks": 221, "num_entries": 796, "num_filter_entries": 796, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449721, "oldest_key_time": 1769449721, "file_creation_time": 1769449922, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 44008 microseconds, and 10189 cpu microseconds.
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.484110) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3243652 bytes OK
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.484129) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.486024) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.486037) EVENT_LOG_v1 {"time_micros": 1769449922486033, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.486055) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3294836, prev total WAL file size 3312100, number of live WAL files 2.
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.516687) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3167KB)], [26(7628KB)]
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449922516856, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 11055502, "oldest_snapshot_seqno": -1}
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4012 keys, 8812513 bytes, temperature: kUnknown
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449922593547, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8812513, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8782403, "index_size": 18992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97441, "raw_average_key_size": 24, "raw_value_size": 8706696, "raw_average_value_size": 2170, "num_data_blocks": 820, "num_entries": 4012, "num_filter_entries": 4012, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769449922, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.593818) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8812513 bytes
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.595145) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.0 rd, 114.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.4 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(6.1) write-amplify(2.7) OK, records in: 4531, records dropped: 519 output_compression: NoCompression
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.595164) EVENT_LOG_v1 {"time_micros": 1769449922595155, "job": 10, "event": "compaction_finished", "compaction_time_micros": 76793, "compaction_time_cpu_micros": 29535, "output_level": 6, "num_output_files": 1, "total_output_size": 8812513, "num_input_records": 4531, "num_output_records": 4012, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449922596155, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449922597962, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.516539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.597990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.597994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.597995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.597997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:52:02.597998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 26 12:52:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:52:02 np0005596060 python3.9[150678]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:52:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:03.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:03 np0005596060 python3.9[150831]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:52:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:52:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:52:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:52:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:52:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 12:52:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 12:52:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:52:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 26 12:52:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:52:04 np0005596060 python3.9[150983]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:04.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:52:04 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5ba25fe5-a979-46b4-9c35-f63d9eadc53c does not exist
Jan 26 12:52:04 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 14efa3b2-2d18-43e1-b7fb-cdba598eb16f does not exist
Jan 26 12:52:04 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9827ab8b-3d1a-42ae-8918-eed25a7a3f6e does not exist
Jan 26 12:52:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:52:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:52:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:52:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:52:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:52:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:52:05 np0005596060 podman[151202]: 2026-01-26 17:52:05.20426845 +0000 UTC m=+0.023145196 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:52:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:05.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:05 np0005596060 python3.9[151290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:52:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:06.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:06 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:52:06 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:52:06 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:52:06 np0005596060 podman[151202]: 2026-01-26 17:52:06.580833392 +0000 UTC m=+1.399710148 container create bec7be56f59f254de17897e1fbddb4b82773f42d906a2d4e81cb987994b776a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 12:52:06 np0005596060 systemd[1]: Started libpod-conmon-bec7be56f59f254de17897e1fbddb4b82773f42d906a2d4e81cb987994b776a3.scope.
Jan 26 12:52:06 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:52:06 np0005596060 podman[151202]: 2026-01-26 17:52:06.676630123 +0000 UTC m=+1.495506869 container init bec7be56f59f254de17897e1fbddb4b82773f42d906a2d4e81cb987994b776a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 12:52:06 np0005596060 podman[151202]: 2026-01-26 17:52:06.68601316 +0000 UTC m=+1.504889876 container start bec7be56f59f254de17897e1fbddb4b82773f42d906a2d4e81cb987994b776a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_fermi, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:52:06 np0005596060 podman[151202]: 2026-01-26 17:52:06.690120024 +0000 UTC m=+1.508996760 container attach bec7be56f59f254de17897e1fbddb4b82773f42d906a2d4e81cb987994b776a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_fermi, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 26 12:52:06 np0005596060 condescending_fermi[151340]: 167 167
Jan 26 12:52:06 np0005596060 systemd[1]: libpod-bec7be56f59f254de17897e1fbddb4b82773f42d906a2d4e81cb987994b776a3.scope: Deactivated successfully.
Jan 26 12:52:06 np0005596060 podman[151202]: 2026-01-26 17:52:06.69630376 +0000 UTC m=+1.515180516 container died bec7be56f59f254de17897e1fbddb4b82773f42d906a2d4e81cb987994b776a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_fermi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:52:06 np0005596060 systemd[1]: var-lib-containers-storage-overlay-99e7c77376c8ae8561b7c2a3f30d0e148cf2e23a9eebf3b1e8f003dada3090bc-merged.mount: Deactivated successfully.
Jan 26 12:52:06 np0005596060 podman[151202]: 2026-01-26 17:52:06.744254042 +0000 UTC m=+1.563131008 container remove bec7be56f59f254de17897e1fbddb4b82773f42d906a2d4e81cb987994b776a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 12:52:06 np0005596060 systemd[1]: libpod-conmon-bec7be56f59f254de17897e1fbddb4b82773f42d906a2d4e81cb987994b776a3.scope: Deactivated successfully.
Jan 26 12:52:06 np0005596060 podman[151364]: 2026-01-26 17:52:06.916574616 +0000 UTC m=+0.047127832 container create 387968529e6024120be3864ee8a716c9cc48a045a3096c5881a73cae0cfdb3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:52:06 np0005596060 systemd[1]: Started libpod-conmon-387968529e6024120be3864ee8a716c9cc48a045a3096c5881a73cae0cfdb3f3.scope.
Jan 26 12:52:06 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:52:06 np0005596060 podman[151364]: 2026-01-26 17:52:06.896685794 +0000 UTC m=+0.027239040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:52:06 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a3aab440b01e336176162a467f408ca6eab1fba927f7a5de8a7d6cde6103a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:06 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a3aab440b01e336176162a467f408ca6eab1fba927f7a5de8a7d6cde6103a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:06 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a3aab440b01e336176162a467f408ca6eab1fba927f7a5de8a7d6cde6103a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:06 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a3aab440b01e336176162a467f408ca6eab1fba927f7a5de8a7d6cde6103a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:06 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a3aab440b01e336176162a467f408ca6eab1fba927f7a5de8a7d6cde6103a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:07 np0005596060 podman[151364]: 2026-01-26 17:52:07.003955154 +0000 UTC m=+0.134508400 container init 387968529e6024120be3864ee8a716c9cc48a045a3096c5881a73cae0cfdb3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 12:52:07 np0005596060 podman[151364]: 2026-01-26 17:52:07.010582231 +0000 UTC m=+0.141135447 container start 387968529e6024120be3864ee8a716c9cc48a045a3096c5881a73cae0cfdb3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:52:07 np0005596060 podman[151364]: 2026-01-26 17:52:07.015694811 +0000 UTC m=+0.146248037 container attach 387968529e6024120be3864ee8a716c9cc48a045a3096c5881a73cae0cfdb3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:52:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:07.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:52:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:07 np0005596060 python3.9[151491]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 26 12:52:07 np0005596060 sweet_galois[151381]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:52:07 np0005596060 sweet_galois[151381]: --> relative data size: 1.0
Jan 26 12:52:07 np0005596060 sweet_galois[151381]: --> All data devices are unavailable
Jan 26 12:52:07 np0005596060 systemd[1]: libpod-387968529e6024120be3864ee8a716c9cc48a045a3096c5881a73cae0cfdb3f3.scope: Deactivated successfully.
Jan 26 12:52:07 np0005596060 podman[151364]: 2026-01-26 17:52:07.953801635 +0000 UTC m=+1.084354851 container died 387968529e6024120be3864ee8a716c9cc48a045a3096c5881a73cae0cfdb3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 12:52:07 np0005596060 systemd[1]: var-lib-containers-storage-overlay-67a3aab440b01e336176162a467f408ca6eab1fba927f7a5de8a7d6cde6103a9-merged.mount: Deactivated successfully.
Jan 26 12:52:08 np0005596060 podman[151364]: 2026-01-26 17:52:08.02681884 +0000 UTC m=+1.157372056 container remove 387968529e6024120be3864ee8a716c9cc48a045a3096c5881a73cae0cfdb3f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:52:08 np0005596060 systemd[1]: libpod-conmon-387968529e6024120be3864ee8a716c9cc48a045a3096c5881a73cae0cfdb3f3.scope: Deactivated successfully.
Jan 26 12:52:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:08.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:08 np0005596060 podman[151653]: 2026-01-26 17:52:08.630938695 +0000 UTC m=+0.024339306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:52:08 np0005596060 podman[151653]: 2026-01-26 17:52:08.950028128 +0000 UTC m=+0.343428759 container create 9e66375e96ffdc28d9ab6b6d486e93e716c0833afddc7014b126721fc446f718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 12:52:09 np0005596060 systemd[1]: Started libpod-conmon-9e66375e96ffdc28d9ab6b6d486e93e716c0833afddc7014b126721fc446f718.scope.
Jan 26 12:52:09 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:52:09 np0005596060 podman[151653]: 2026-01-26 17:52:09.352785845 +0000 UTC m=+0.746186456 container init 9e66375e96ffdc28d9ab6b6d486e93e716c0833afddc7014b126721fc446f718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 12:52:09 np0005596060 podman[151653]: 2026-01-26 17:52:09.363127666 +0000 UTC m=+0.756528267 container start 9e66375e96ffdc28d9ab6b6d486e93e716c0833afddc7014b126721fc446f718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:52:09 np0005596060 intelligent_brahmagupta[151814]: 167 167
Jan 26 12:52:09 np0005596060 systemd[1]: libpod-9e66375e96ffdc28d9ab6b6d486e93e716c0833afddc7014b126721fc446f718.scope: Deactivated successfully.
Jan 26 12:52:09 np0005596060 podman[151653]: 2026-01-26 17:52:09.373702763 +0000 UTC m=+0.767103374 container attach 9e66375e96ffdc28d9ab6b6d486e93e716c0833afddc7014b126721fc446f718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brahmagupta, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:52:09 np0005596060 podman[151653]: 2026-01-26 17:52:09.374887033 +0000 UTC m=+0.768287634 container died 9e66375e96ffdc28d9ab6b6d486e93e716c0833afddc7014b126721fc446f718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 26 12:52:09 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8cdee608a417477f23d9c7b836bbe9faa72bf965f7d960abdc6055a7c31af361-merged.mount: Deactivated successfully.
Jan 26 12:52:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:09.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:09 np0005596060 python3.9[151822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:09 np0005596060 podman[151653]: 2026-01-26 17:52:09.557312453 +0000 UTC m=+0.950713044 container remove 9e66375e96ffdc28d9ab6b6d486e93e716c0833afddc7014b126721fc446f718 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_brahmagupta, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 12:52:09 np0005596060 systemd[1]: libpod-conmon-9e66375e96ffdc28d9ab6b6d486e93e716c0833afddc7014b126721fc446f718.scope: Deactivated successfully.
Jan 26 12:52:09 np0005596060 podman[151892]: 2026-01-26 17:52:09.751322745 +0000 UTC m=+0.046665920 container create bdeb2062791ce1569394ccb8447a8ff43f0dc93244f83ea258c93f3982fac8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaum, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:52:09 np0005596060 systemd[1]: Started libpod-conmon-bdeb2062791ce1569394ccb8447a8ff43f0dc93244f83ea258c93f3982fac8bc.scope.
Jan 26 12:52:09 np0005596060 podman[151892]: 2026-01-26 17:52:09.733200467 +0000 UTC m=+0.028543672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:52:09 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:52:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cef7b0d928c60917f10d7a0a60eb9fcbe74d631238a6e9b037e2685c78653c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cef7b0d928c60917f10d7a0a60eb9fcbe74d631238a6e9b037e2685c78653c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cef7b0d928c60917f10d7a0a60eb9fcbe74d631238a6e9b037e2685c78653c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cef7b0d928c60917f10d7a0a60eb9fcbe74d631238a6e9b037e2685c78653c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:09 np0005596060 podman[151892]: 2026-01-26 17:52:09.841479592 +0000 UTC m=+0.136822787 container init bdeb2062791ce1569394ccb8447a8ff43f0dc93244f83ea258c93f3982fac8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 12:52:09 np0005596060 podman[151892]: 2026-01-26 17:52:09.850822868 +0000 UTC m=+0.146166043 container start bdeb2062791ce1569394ccb8447a8ff43f0dc93244f83ea258c93f3982fac8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:52:09 np0005596060 podman[151892]: 2026-01-26 17:52:09.854609434 +0000 UTC m=+0.149952639 container attach bdeb2062791ce1569394ccb8447a8ff43f0dc93244f83ea258c93f3982fac8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaum, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:52:10 np0005596060 python3.9[151986]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769449928.7931614-218-241632963365241/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:10.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]: {
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:    "1": [
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:        {
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            "devices": [
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "/dev/loop3"
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            ],
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            "lv_name": "ceph_lv0",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            "lv_size": "7511998464",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            "name": "ceph_lv0",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            "tags": {
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "ceph.cluster_name": "ceph",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "ceph.crush_device_class": "",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "ceph.encrypted": "0",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "ceph.osd_id": "1",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "ceph.type": "block",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:                "ceph.vdo": "0"
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            },
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            "type": "block",
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:            "vg_name": "ceph_vg0"
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:        }
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]:    ]
Jan 26 12:52:10 np0005596060 ecstatic_chaum[151937]: }
Jan 26 12:52:10 np0005596060 podman[151892]: 2026-01-26 17:52:10.702683923 +0000 UTC m=+0.998027098 container died bdeb2062791ce1569394ccb8447a8ff43f0dc93244f83ea258c93f3982fac8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 26 12:52:10 np0005596060 systemd[1]: libpod-bdeb2062791ce1569394ccb8447a8ff43f0dc93244f83ea258c93f3982fac8bc.scope: Deactivated successfully.
Jan 26 12:52:10 np0005596060 systemd[1]: var-lib-containers-storage-overlay-5cef7b0d928c60917f10d7a0a60eb9fcbe74d631238a6e9b037e2685c78653c3-merged.mount: Deactivated successfully.
Jan 26 12:52:10 np0005596060 podman[151892]: 2026-01-26 17:52:10.772333353 +0000 UTC m=+1.067676528 container remove bdeb2062791ce1569394ccb8447a8ff43f0dc93244f83ea258c93f3982fac8bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 12:52:10 np0005596060 systemd[1]: libpod-conmon-bdeb2062791ce1569394ccb8447a8ff43f0dc93244f83ea258c93f3982fac8bc.scope: Deactivated successfully.
Jan 26 12:52:10 np0005596060 python3.9[152154]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:11 np0005596060 podman[152417]: 2026-01-26 17:52:11.413048253 +0000 UTC m=+0.046118386 container create 9dc4531ff626d994884fccb79f4fe10d491197c6057ce4ab65370799e0d64450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 12:52:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:11.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:11 np0005596060 systemd[1]: Started libpod-conmon-9dc4531ff626d994884fccb79f4fe10d491197c6057ce4ab65370799e0d64450.scope.
Jan 26 12:52:11 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:52:11 np0005596060 podman[152417]: 2026-01-26 17:52:11.391520309 +0000 UTC m=+0.024590462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:52:11 np0005596060 podman[152417]: 2026-01-26 17:52:11.49640843 +0000 UTC m=+0.129478573 container init 9dc4531ff626d994884fccb79f4fe10d491197c6057ce4ab65370799e0d64450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 12:52:11 np0005596060 python3.9[152409]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769449930.4156852-263-234198879123781/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:11 np0005596060 podman[152417]: 2026-01-26 17:52:11.503717054 +0000 UTC m=+0.136787177 container start 9dc4531ff626d994884fccb79f4fe10d491197c6057ce4ab65370799e0d64450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:52:11 np0005596060 podman[152417]: 2026-01-26 17:52:11.508856784 +0000 UTC m=+0.141926937 container attach 9dc4531ff626d994884fccb79f4fe10d491197c6057ce4ab65370799e0d64450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 12:52:11 np0005596060 modest_clarke[152434]: 167 167
Jan 26 12:52:11 np0005596060 systemd[1]: libpod-9dc4531ff626d994884fccb79f4fe10d491197c6057ce4ab65370799e0d64450.scope: Deactivated successfully.
Jan 26 12:52:11 np0005596060 podman[152417]: 2026-01-26 17:52:11.510847774 +0000 UTC m=+0.143917907 container died 9dc4531ff626d994884fccb79f4fe10d491197c6057ce4ab65370799e0d64450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 12:52:11 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4c3400bd826fd552feb0447a869d2d6051be6c64c6b145b236defd4f08067e7d-merged.mount: Deactivated successfully.
Jan 26 12:52:11 np0005596060 podman[152417]: 2026-01-26 17:52:11.558801406 +0000 UTC m=+0.191871539 container remove 9dc4531ff626d994884fccb79f4fe10d491197c6057ce4ab65370799e0d64450 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 12:52:11 np0005596060 systemd[1]: libpod-conmon-9dc4531ff626d994884fccb79f4fe10d491197c6057ce4ab65370799e0d64450.scope: Deactivated successfully.
Jan 26 12:52:11 np0005596060 podman[152483]: 2026-01-26 17:52:11.717459665 +0000 UTC m=+0.038559745 container create a1850e730b402aa77b87aad88959823d8e4fbc76353fee4791025e8e680cc786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:52:11 np0005596060 systemd[1]: Started libpod-conmon-a1850e730b402aa77b87aad88959823d8e4fbc76353fee4791025e8e680cc786.scope.
Jan 26 12:52:11 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:52:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da11a49aeb068133d00b08ba44b5e9b4bffb44e6c6ebe91e2acc9503a8cee6f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da11a49aeb068133d00b08ba44b5e9b4bffb44e6c6ebe91e2acc9503a8cee6f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da11a49aeb068133d00b08ba44b5e9b4bffb44e6c6ebe91e2acc9503a8cee6f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da11a49aeb068133d00b08ba44b5e9b4bffb44e6c6ebe91e2acc9503a8cee6f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:52:11 np0005596060 podman[152483]: 2026-01-26 17:52:11.701585524 +0000 UTC m=+0.022685634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:52:11 np0005596060 podman[152483]: 2026-01-26 17:52:11.800918704 +0000 UTC m=+0.122018804 container init a1850e730b402aa77b87aad88959823d8e4fbc76353fee4791025e8e680cc786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:52:11 np0005596060 podman[152483]: 2026-01-26 17:52:11.807709966 +0000 UTC m=+0.128810046 container start a1850e730b402aa77b87aad88959823d8e4fbc76353fee4791025e8e680cc786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 12:52:11 np0005596060 podman[152483]: 2026-01-26 17:52:11.811954493 +0000 UTC m=+0.133054603 container attach a1850e730b402aa77b87aad88959823d8e4fbc76353fee4791025e8e680cc786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:52:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:52:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:12.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:12 np0005596060 python3.9[152632]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:52:12 np0005596060 clever_murdock[152500]: {
Jan 26 12:52:12 np0005596060 clever_murdock[152500]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:52:12 np0005596060 clever_murdock[152500]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:52:12 np0005596060 clever_murdock[152500]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:52:12 np0005596060 clever_murdock[152500]:        "osd_id": 1,
Jan 26 12:52:12 np0005596060 clever_murdock[152500]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:52:12 np0005596060 clever_murdock[152500]:        "type": "bluestore"
Jan 26 12:52:12 np0005596060 clever_murdock[152500]:    }
Jan 26 12:52:12 np0005596060 clever_murdock[152500]: }
Jan 26 12:52:12 np0005596060 systemd[1]: libpod-a1850e730b402aa77b87aad88959823d8e4fbc76353fee4791025e8e680cc786.scope: Deactivated successfully.
Jan 26 12:52:12 np0005596060 podman[152483]: 2026-01-26 17:52:12.743982214 +0000 UTC m=+1.065082294 container died a1850e730b402aa77b87aad88959823d8e4fbc76353fee4791025e8e680cc786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Jan 26 12:52:12 np0005596060 systemd[1]: var-lib-containers-storage-overlay-da11a49aeb068133d00b08ba44b5e9b4bffb44e6c6ebe91e2acc9503a8cee6f1-merged.mount: Deactivated successfully.
Jan 26 12:52:12 np0005596060 podman[152483]: 2026-01-26 17:52:12.977488614 +0000 UTC m=+1.298588694 container remove a1850e730b402aa77b87aad88959823d8e4fbc76353fee4791025e8e680cc786 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_murdock, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:52:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:52:13 np0005596060 systemd[1]: libpod-conmon-a1850e730b402aa77b87aad88959823d8e4fbc76353fee4791025e8e680cc786.scope: Deactivated successfully.
Jan 26 12:52:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:52:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:52:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:52:13 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 787fceb0-907d-4b82-bda0-b9b69a3b0e79 does not exist
Jan 26 12:52:13 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9cf6eacf-1bc8-4026-bd25-b7ea9efd9f71 does not exist
Jan 26 12:52:13 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 6394f376-533d-4d84-804a-01d416772ace does not exist
Jan 26 12:52:13 np0005596060 ovn_controller[148842]: 2026-01-26T17:52:13Z|00025|memory|INFO|17280 kB peak resident set size after 29.7 seconds
Jan 26 12:52:13 np0005596060 ovn_controller[148842]: 2026-01-26T17:52:13Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 26 12:52:13 np0005596060 podman[152669]: 2026-01-26 17:52:13.136715057 +0000 UTC m=+0.094835817 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 12:52:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:13.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:13 np0005596060 python3.9[152820]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:52:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:52:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:52:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:52:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:52:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:52:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:52:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:52:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:52:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:14.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:15.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:16 np0005596060 python3.9[152975]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 12:52:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:16.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:52:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:17.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:18 np0005596060 python3.9[153179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:18.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:18 np0005596060 python3.9[153300]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769449937.3477354-374-23923986664692/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:52:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:19.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:52:19 np0005596060 python3.9[153451]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:20 np0005596060 python3.9[153572]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769449939.1027675-374-177945592456840/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:52:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:20.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:52:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:21.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:21 np0005596060 python3.9[153723]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:22 np0005596060 python3.9[153844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769449941.2199318-506-128201745028435/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:52:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:52:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:22.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:52:22 np0005596060 python3.9[153994]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:23.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:23 np0005596060 python3.9[154115]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769449942.4598956-506-83281031398923/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:24 np0005596060 python3.9[154266]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:52:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:52:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:24.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:52:25 np0005596060 python3.9[154420]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:25.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:26 np0005596060 python3.9[154573]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:26.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:26 np0005596060 python3.9[154651]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:27.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:52:28 np0005596060 python3.9[154803]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:28.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:28 np0005596060 python3.9[154882]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:29 np0005596060 python3.9[155034]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:52:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:29.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:30 np0005596060 python3.9[155187]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:30.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:30 np0005596060 python3.9[155265]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:52:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:31 np0005596060 python3.9[155417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:31.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:31 np0005596060 python3.9[155496]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:52:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:32.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:32 np0005596060 python3.9[155648]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:52:32 np0005596060 systemd[1]: Reloading.
Jan 26 12:52:32 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:52:32 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:52:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:52:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:33.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:34 np0005596060 python3.9[155839]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:34 np0005596060 python3.9[155917]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:52:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:52:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:34.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:52:35 np0005596060 python3.9[156069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:35.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:35 np0005596060 python3.9[156148]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:52:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:36.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:36 np0005596060 python3.9[156300]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:52:36 np0005596060 systemd[1]: Reloading.
Jan 26 12:52:36 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:52:36 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:52:37 np0005596060 systemd[1]: Starting Create netns directory...
Jan 26 12:52:37 np0005596060 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 26 12:52:37 np0005596060 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 26 12:52:37 np0005596060 systemd[1]: Finished Create netns directory.
Jan 26 12:52:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:37.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:52:38 np0005596060 python3.9[156545]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:38.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:39 np0005596060 python3.9[156697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:39.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:39 np0005596060 python3.9[156821]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769449958.462677-959-105201595705125/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:40.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:40 np0005596060 python3.9[156973]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:52:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:41.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:41 np0005596060 python3.9[157126]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:52:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:42.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:42 np0005596060 python3.9[157278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:52:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:52:43 np0005596060 python3.9[157401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769449961.973664-1058-276443940591489/.source.json _original_basename=.3kv2xfnp follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:52:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:43.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:43 np0005596060 podman[157523]: 2026-01-26 17:52:43.944770122 +0000 UTC m=+0.140807485 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:52:44
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', 'images', 'volumes', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control']
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:52:44 np0005596060 python3.9[157562]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:52:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:52:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:44.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:45.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:46 np0005596060 python3.9[158002]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 26 12:52:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:46.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:47.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:47 np0005596060 python3.9[158155]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 12:52:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:52:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:52:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:48.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:52:49 np0005596060 python3[158307]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 12:52:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:49.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:50.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:52:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:51.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:52.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:52:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 26 12:52:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 26 12:52:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 26 12:52:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 26 12:52:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 26 12:52:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 26 12:52:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 26 12:52:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 26 12:52:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Jan 26 12:52:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:53.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:52:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:54.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:52:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Jan 26 12:52:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:52:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:55.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:52:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:52:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:56.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:52:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Jan 26 12:52:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:57.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:52:58.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:52:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:52:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Jan 26 12:52:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:52:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:52:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:52:59.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:00 np0005596060 podman[158323]: 2026-01-26 17:53:00.392536079 +0000 UTC m=+10.955639293 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 12:53:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:53:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:00.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:53:00 np0005596060 podman[158497]: 2026-01-26 17:53:00.517455494 +0000 UTC m=+0.030597619 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 12:53:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Jan 26 12:53:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:01.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:02 np0005596060 podman[158497]: 2026-01-26 17:53:02.119126978 +0000 UTC m=+1.632269093 container create 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 26 12:53:02 np0005596060 python3[158307]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 12:53:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:02.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:02 np0005596060 python3.9[158694]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:53:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Jan 26 12:53:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:53:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:03.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:53:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:53:03 np0005596060 python3.9[158849]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:04 np0005596060 python3.9[158925]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:53:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:53:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:04.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:53:05 np0005596060 python3.9[159076]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769449984.341637-1292-158248814304810/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 12:53:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:53:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:05.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:53:05 np0005596060 python3.9[159152]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 12:53:05 np0005596060 systemd[1]: Reloading.
Jan 26 12:53:05 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:53:05 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:53:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:06.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 0 B/s wr, 76 op/s
Jan 26 12:53:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:07.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:08.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Jan 26 12:53:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:09.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:10 np0005596060 python3.9[159264]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:53:10 np0005596060 systemd[1]: Reloading.
Jan 26 12:53:10 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:53:10 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:53:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:10.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:10 np0005596060 systemd[1]: Starting ovn_metadata_agent container...
Jan 26 12:53:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:53:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 47 op/s
Jan 26 12:53:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:11.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:11 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:53:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edfbdb008249f3631c2bb05e14244359e029043b048e5ae69b3becb2890f4093/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edfbdb008249f3631c2bb05e14244359e029043b048e5ae69b3becb2890f4093/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:12 np0005596060 systemd[1]: Started /usr/bin/podman healthcheck run 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d.
Jan 26 12:53:12 np0005596060 podman[159309]: 2026-01-26 17:53:12.494686091 +0000 UTC m=+1.566262528 container init 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: + sudo -E kolla_set_configs
Jan 26 12:53:12 np0005596060 podman[159309]: 2026-01-26 17:53:12.531300546 +0000 UTC m=+1.602876963 container start 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 12:53:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:53:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:12.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Validating config file
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Copying service configuration files
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Writing out command to execute
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: ++ cat /run_command
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: + CMD=neutron-ovn-metadata-agent
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: + ARGS=
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: + sudo kolla_copy_cacerts
Jan 26 12:53:12 np0005596060 edpm-start-podman-container[159309]: ovn_metadata_agent
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: Running command: 'neutron-ovn-metadata-agent'
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: + [[ ! -n '' ]]
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: + . kolla_extend_start
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: + umask 0022
Jan 26 12:53:12 np0005596060 ovn_metadata_agent[159326]: + exec neutron-ovn-metadata-agent
Jan 26 12:53:12 np0005596060 podman[159333]: 2026-01-26 17:53:12.653406323 +0000 UTC m=+0.111380225 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 26 12:53:12 np0005596060 edpm-start-podman-container[159308]: Creating additional drop-in dependency for "ovn_metadata_agent" (60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d)
Jan 26 12:53:12 np0005596060 systemd[1]: Reloading.
Jan 26 12:53:12 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:53:12 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:53:13 np0005596060 systemd[1]: Started ovn_metadata_agent container.
Jan 26 12:53:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Jan 26 12:53:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:13.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:13 np0005596060 python3.9[159643]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 26 12:53:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:53:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:53:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:53:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:53:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:53:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:53:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:14.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.674 159331 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.674 159331 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.674 159331 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.675 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.675 159331 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.675 159331 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.675 159331 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.675 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.676 159331 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.676 159331 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.676 159331 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.676 159331 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.676 159331 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.676 159331 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.676 159331 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.676 159331 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.676 159331 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.677 159331 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.677 159331 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.677 159331 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.677 159331 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.677 159331 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.677 159331 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.678 159331 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.678 159331 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.678 159331 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.678 159331 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.678 159331 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.678 159331 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.678 159331 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.678 159331 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.679 159331 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.679 159331 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.679 159331 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.679 159331 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.679 159331 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.679 159331 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.679 159331 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.679 159331 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.680 159331 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.680 159331 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.680 159331 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.680 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.680 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.680 159331 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.680 159331 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.680 159331 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.680 159331 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.680 159331 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.680 159331 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.681 159331 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.681 159331 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.681 159331 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.681 159331 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.681 159331 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.681 159331 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.681 159331 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.681 159331 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.681 159331 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.681 159331 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.682 159331 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.682 159331 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.682 159331 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.682 159331 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.682 159331 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.682 159331 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.682 159331 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.682 159331 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.682 159331 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.682 159331 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.683 159331 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.683 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.683 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.683 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.683 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.683 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.683 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.683 159331 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.683 159331 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.684 159331 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.684 159331 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.684 159331 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.684 159331 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.684 159331 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.684 159331 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.684 159331 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.684 159331 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.684 159331 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.684 159331 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.685 159331 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.685 159331 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.685 159331 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.685 159331 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.685 159331 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.685 159331 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.685 159331 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.685 159331 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.685 159331 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.685 159331 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.685 159331 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.686 159331 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.686 159331 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.686 159331 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.686 159331 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.686 159331 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.686 159331 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.686 159331 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.686 159331 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.686 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.686 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.687 159331 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.687 159331 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.687 159331 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.687 159331 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.687 159331 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.687 159331 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.687 159331 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.687 159331 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.687 159331 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.688 159331 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.688 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.688 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.688 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.688 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.688 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.688 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.688 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.688 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.689 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.689 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.689 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.689 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.689 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.689 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.689 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.689 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.689 159331 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.690 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.690 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.690 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.690 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.690 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.690 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.690 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.690 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.690 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.691 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.691 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.691 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.691 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.691 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.691 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.691 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.691 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.691 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.691 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.692 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.692 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.692 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.692 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.692 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.692 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.692 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.692 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.692 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.693 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.693 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.693 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.693 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.693 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.693 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.693 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.693 159331 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.693 159331 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.694 159331 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.694 159331 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.694 159331 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.694 159331 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.694 159331 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.694 159331 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.694 159331 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.694 159331 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.694 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.694 159331 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.695 159331 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.695 159331 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.695 159331 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.695 159331 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.695 159331 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.695 159331 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.695 159331 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.695 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.695 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.696 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.696 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.696 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.696 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.696 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.696 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.696 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.696 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.696 159331 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.696 159331 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.697 159331 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.697 159331 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.697 159331 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.697 159331 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.697 159331 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.697 159331 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.697 159331 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.697 159331 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.697 159331 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.698 159331 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.698 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.698 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.698 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.698 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.698 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.698 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.698 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.698 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.699 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.699 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.699 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.699 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.699 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.699 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.699 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.699 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.699 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.700 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.700 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.700 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.700 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.700 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.700 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.700 159331 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.701 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.701 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.701 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.701 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.701 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.701 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.701 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.702 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.702 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.702 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.702 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.702 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.702 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.702 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.702 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.703 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.703 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.703 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.703 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.703 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.703 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.703 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.703 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.704 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.704 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.704 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.704 159331 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.704 159331 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.704 159331 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.704 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.704 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.705 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.705 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.705 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.705 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.705 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.705 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.705 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.705 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.706 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.706 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.706 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.706 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.706 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.706 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.706 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.706 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.707 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.707 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.707 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.707 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.707 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.707 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.707 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.707 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.708 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.708 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.708 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.708 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.708 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.708 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.708 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.708 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.709 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.709 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.709 159331 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.709 159331 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.719 159331 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.720 159331 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.720 159331 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.720 159331 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.720 159331 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.734 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c76f2593-4bbb-4cef-b447-9e180245ada6 (UUID: c76f2593-4bbb-4cef-b447-9e180245ada6) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.754 159331 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.755 159331 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.755 159331 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.755 159331 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.759 159331 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.765 159331 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.776 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c76f2593-4bbb-4cef-b447-9e180245ada6'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], external_ids={}, name=c76f2593-4bbb-4cef-b447-9e180245ada6, nb_cfg_timestamp=1769449911432, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.778 159331 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fa9b0ad0af0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.779 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.780 159331 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.780 159331 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.780 159331 INFO oslo_service.service [-] Starting 1 workers
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.786 159331 DEBUG oslo_service.service [-] Started child 159861 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.791 159331 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpn4jf_j29/privsep.sock']
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.792 159861 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-518535'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 26 12:53:14 np0005596060 podman[159808]: 2026-01-26 17:53:14.837663494 +0000 UTC m=+0.100345885 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:53:14 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 85a3bc69-fa16-415d-ad26-9e183bb6c237 does not exist
Jan 26 12:53:14 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 2ec425df-30b9-4a36-aeee-737b90e7e356 does not exist
Jan 26 12:53:14 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 811c9634-1f80-470e-ad7e-0a94404bfa68 does not exist
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:53:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.962 159861 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.963 159861 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.963 159861 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.967 159861 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.974 159861 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 26 12:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:14.980 159861 INFO eventlet.wsgi.server [-] (159861) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 26 12:53:15 np0005596060 python3.9[159873]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:53:15 np0005596060 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 26 12:53:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Jan 26 12:53:15 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:15.552 159331 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 26 12:53:15 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:15.553 159331 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpn4jf_j29/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 26 12:53:15 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:15.389 160107 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 12:53:15 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:15.396 160107 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 12:53:15 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:15.399 160107 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 26 12:53:15 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:15.400 160107 INFO oslo.privsep.daemon [-] privsep daemon running as pid 160107
Jan 26 12:53:15 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:15.557 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[8d0e5022-8bce-4111-9e7f-740b3b57b7c0]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 12:53:15 np0005596060 podman[160145]: 2026-01-26 17:53:15.55950725 +0000 UTC m=+0.072979366 container create f67ce9667fba43a507bdeb29fa8b9ff4584e3c6cb0ccb00255c03cc623c87f07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_newton, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 12:53:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:15.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:15 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 12:53:15 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:53:15 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:53:15 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:53:15 np0005596060 podman[160145]: 2026-01-26 17:53:15.519721946 +0000 UTC m=+0.033194082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:53:15 np0005596060 systemd[1]: Started libpod-conmon-f67ce9667fba43a507bdeb29fa8b9ff4584e3c6cb0ccb00255c03cc623c87f07.scope.
Jan 26 12:53:15 np0005596060 python3.9[160139]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769449994.5465012-1427-145366414546694/.source.yaml _original_basename=.k8cna_id follow=False checksum=2750f4fe1239f577d6b91723132df62bfa8e4395 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:15 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:53:15 np0005596060 podman[160145]: 2026-01-26 17:53:15.79994455 +0000 UTC m=+0.313416686 container init f67ce9667fba43a507bdeb29fa8b9ff4584e3c6cb0ccb00255c03cc623c87f07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:53:15 np0005596060 podman[160145]: 2026-01-26 17:53:15.808717355 +0000 UTC m=+0.322189471 container start f67ce9667fba43a507bdeb29fa8b9ff4584e3c6cb0ccb00255c03cc623c87f07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_newton, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 12:53:15 np0005596060 systemd[1]: libpod-f67ce9667fba43a507bdeb29fa8b9ff4584e3c6cb0ccb00255c03cc623c87f07.scope: Deactivated successfully.
Jan 26 12:53:15 np0005596060 conmon[160164]: conmon f67ce9667fba43a507bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f67ce9667fba43a507bdeb29fa8b9ff4584e3c6cb0ccb00255c03cc623c87f07.scope/container/memory.events
Jan 26 12:53:15 np0005596060 unruffled_newton[160164]: 167 167
Jan 26 12:53:16 np0005596060 podman[160145]: 2026-01-26 17:53:16.01132127 +0000 UTC m=+0.524793426 container attach f67ce9667fba43a507bdeb29fa8b9ff4584e3c6cb0ccb00255c03cc623c87f07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_newton, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 12:53:16 np0005596060 podman[160145]: 2026-01-26 17:53:16.012253423 +0000 UTC m=+0.525725579 container died f67ce9667fba43a507bdeb29fa8b9ff4584e3c6cb0ccb00255c03cc623c87f07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_newton, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 12:53:16 np0005596060 systemd[1]: var-lib-containers-storage-overlay-e7d23d7a35f465a17077981c05fcfc82167c06adea8452b523af0a408c711d09-merged.mount: Deactivated successfully.
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.100 160107 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.100 160107 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.100 160107 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 12:53:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:53:16 np0005596060 podman[160145]: 2026-01-26 17:53:16.240753462 +0000 UTC m=+0.754225578 container remove f67ce9667fba43a507bdeb29fa8b9ff4584e3c6cb0ccb00255c03cc623c87f07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 12:53:16 np0005596060 systemd[1]: libpod-conmon-f67ce9667fba43a507bdeb29fa8b9ff4584e3c6cb0ccb00255c03cc623c87f07.scope: Deactivated successfully.
Jan 26 12:53:16 np0005596060 podman[160214]: 2026-01-26 17:53:16.449422315 +0000 UTC m=+0.088932766 container create 75484ad0884875cc6434d9c2657974f4ee90ee6e36f25af656a8373f121b052d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shannon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 12:53:16 np0005596060 podman[160214]: 2026-01-26 17:53:16.38786388 +0000 UTC m=+0.027374351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:53:16 np0005596060 systemd[1]: Started libpod-conmon-75484ad0884875cc6434d9c2657974f4ee90ee6e36f25af656a8373f121b052d.scope.
Jan 26 12:53:16 np0005596060 systemd[1]: session-48.scope: Deactivated successfully.
Jan 26 12:53:16 np0005596060 systemd[1]: session-48.scope: Consumed 1min 71ms CPU time.
Jan 26 12:53:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:16.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:16 np0005596060 systemd-logind[786]: Session 48 logged out. Waiting for processes to exit.
Jan 26 12:53:16 np0005596060 systemd-logind[786]: Removed session 48.
Jan 26 12:53:16 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:53:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ea84b0ae17ba072ba35b4cf046e20f18b0f289f9300f0912ff6a33917489ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ea84b0ae17ba072ba35b4cf046e20f18b0f289f9300f0912ff6a33917489ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ea84b0ae17ba072ba35b4cf046e20f18b0f289f9300f0912ff6a33917489ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ea84b0ae17ba072ba35b4cf046e20f18b0f289f9300f0912ff6a33917489ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ea84b0ae17ba072ba35b4cf046e20f18b0f289f9300f0912ff6a33917489ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:16 np0005596060 podman[160214]: 2026-01-26 17:53:16.592125745 +0000 UTC m=+0.231636206 container init 75484ad0884875cc6434d9c2657974f4ee90ee6e36f25af656a8373f121b052d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 12:53:16 np0005596060 podman[160214]: 2026-01-26 17:53:16.600805388 +0000 UTC m=+0.240315839 container start 75484ad0884875cc6434d9c2657974f4ee90ee6e36f25af656a8373f121b052d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:53:16 np0005596060 podman[160214]: 2026-01-26 17:53:16.604806596 +0000 UTC m=+0.244317067 container attach 75484ad0884875cc6434d9c2657974f4ee90ee6e36f25af656a8373f121b052d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.746 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[14a589ed-cd13-4052-9cf9-63b3a100f083]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.750 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, column=external_ids, values=({'neutron:ovn-metadata-id': 'aed93dbf-3966-5bf9-8ae5-2b3a3f1eb048'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.760 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.766 159331 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.766 159331 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.766 159331 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.767 159331 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.767 159331 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.767 159331 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.767 159331 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.767 159331 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.767 159331 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.768 159331 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.768 159331 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.768 159331 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.768 159331 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.768 159331 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.768 159331 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.769 159331 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.769 159331 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.769 159331 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.769 159331 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.769 159331 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.769 159331 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.770 159331 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.770 159331 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.770 159331 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.770 159331 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.770 159331 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.770 159331 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.771 159331 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.771 159331 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.771 159331 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.771 159331 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.771 159331 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.771 159331 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.772 159331 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.772 159331 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.772 159331 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.772 159331 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.772 159331 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.773 159331 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.773 159331 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.773 159331 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.773 159331 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.773 159331 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.773 159331 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.773 159331 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.774 159331 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.774 159331 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.774 159331 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.774 159331 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.774 159331 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.774 159331 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.775 159331 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.775 159331 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.775 159331 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.775 159331 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.775 159331 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.775 159331 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.775 159331 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.776 159331 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.776 159331 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.776 159331 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.776 159331 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.776 159331 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.776 159331 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.777 159331 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.777 159331 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.777 159331 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.777 159331 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.777 159331 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.777 159331 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.778 159331 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.778 159331 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.778 159331 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.778 159331 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.778 159331 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.778 159331 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.778 159331 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.779 159331 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.779 159331 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.779 159331 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.779 159331 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.779 159331 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.779 159331 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.780 159331 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.780 159331 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.780 159331 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.780 159331 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.780 159331 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.780 159331 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.780 159331 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.781 159331 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.781 159331 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.781 159331 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.781 159331 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.781 159331 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.781 159331 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.782 159331 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.782 159331 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.782 159331 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.782 159331 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.782 159331 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.782 159331 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.782 159331 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.783 159331 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.783 159331 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.783 159331 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.783 159331 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.783 159331 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.783 159331 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.783 159331 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.784 159331 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.784 159331 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.784 159331 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.784 159331 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.784 159331 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.785 159331 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.785 159331 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.785 159331 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.785 159331 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.785 159331 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.786 159331 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.786 159331 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.786 159331 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.786 159331 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.786 159331 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.786 159331 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.786 159331 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.787 159331 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.787 159331 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.787 159331 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.787 159331 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.787 159331 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.787 159331 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.788 159331 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.788 159331 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.788 159331 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.788 159331 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.788 159331 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.788 159331 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.789 159331 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.789 159331 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.789 159331 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.789 159331 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.789 159331 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.789 159331 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.789 159331 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.790 159331 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.790 159331 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.790 159331 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.790 159331 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.790 159331 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.790 159331 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.790 159331 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.791 159331 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.791 159331 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.791 159331 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.791 159331 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.791 159331 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.791 159331 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.791 159331 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.792 159331 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.792 159331 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.792 159331 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.792 159331 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.792 159331 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.792 159331 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.792 159331 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.793 159331 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.793 159331 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.793 159331 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.793 159331 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.793 159331 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.793 159331 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.793 159331 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.794 159331 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.794 159331 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.794 159331 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.794 159331 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.794 159331 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.794 159331 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.795 159331 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.795 159331 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.795 159331 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.795 159331 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.795 159331 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.795 159331 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.795 159331 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.796 159331 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.796 159331 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.796 159331 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.796 159331 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.796 159331 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.796 159331 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.797 159331 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.797 159331 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.797 159331 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.797 159331 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.797 159331 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.797 159331 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.798 159331 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.798 159331 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.798 159331 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.798 159331 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.798 159331 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.798 159331 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.799 159331 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.799 159331 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.799 159331 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.799 159331 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.799 159331 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.799 159331 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.800 159331 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.800 159331 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.800 159331 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.800 159331 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.800 159331 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.800 159331 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.800 159331 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.801 159331 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.801 159331 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.801 159331 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.801 159331 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.801 159331 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.801 159331 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.802 159331 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.802 159331 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.802 159331 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.802 159331 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.802 159331 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.802 159331 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.802 159331 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.803 159331 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.803 159331 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.803 159331 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.803 159331 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.803 159331 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.803 159331 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.804 159331 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.804 159331 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.804 159331 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.804 159331 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.804 159331 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.804 159331 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.804 159331 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.805 159331 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.805 159331 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.805 159331 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.805 159331 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.805 159331 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.806 159331 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.806 159331 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.806 159331 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.806 159331 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.806 159331 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.806 159331 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.807 159331 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.807 159331 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.807 159331 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.807 159331 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.807 159331 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.807 159331 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.808 159331 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.808 159331 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.808 159331 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.808 159331 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.808 159331 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.808 159331 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.809 159331 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.809 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.809 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.809 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.809 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.810 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.810 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.810 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.810 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.810 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.811 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.811 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.811 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.811 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.811 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.811 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.812 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.812 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.812 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.812 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.812 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.813 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.813 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.813 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.813 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.813 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.814 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.814 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.814 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.814 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.814 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.815 159331 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.815 159331 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.815 159331 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.815 159331 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.815 159331 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 12:53:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:53:16.815 159331 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 26 12:53:17 np0005596060 frosty_shannon[160232]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:53:17 np0005596060 frosty_shannon[160232]: --> relative data size: 1.0
Jan 26 12:53:17 np0005596060 frosty_shannon[160232]: --> All data devices are unavailable
Jan 26 12:53:17 np0005596060 systemd[1]: libpod-75484ad0884875cc6434d9c2657974f4ee90ee6e36f25af656a8373f121b052d.scope: Deactivated successfully.
Jan 26 12:53:17 np0005596060 podman[160214]: 2026-01-26 17:53:17.446138653 +0000 UTC m=+1.085649104 container died 75484ad0884875cc6434d9c2657974f4ee90ee6e36f25af656a8373f121b052d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 12:53:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 0 B/s wr, 75 op/s
Jan 26 12:53:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:17.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:17 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a0ea84b0ae17ba072ba35b4cf046e20f18b0f289f9300f0912ff6a33917489ff-merged.mount: Deactivated successfully.
Jan 26 12:53:17 np0005596060 podman[160214]: 2026-01-26 17:53:17.892606901 +0000 UTC m=+1.532117342 container remove 75484ad0884875cc6434d9c2657974f4ee90ee6e36f25af656a8373f121b052d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 26 12:53:17 np0005596060 systemd[1]: libpod-conmon-75484ad0884875cc6434d9c2657974f4ee90ee6e36f25af656a8373f121b052d.scope: Deactivated successfully.
Jan 26 12:53:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:18.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:18 np0005596060 podman[160448]: 2026-01-26 17:53:18.593538545 +0000 UTC m=+0.121453102 container create e37e431d484990b10ed1db199e7045b1ff451170cf29964416170c506b3fc414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_heisenberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 12:53:18 np0005596060 podman[160448]: 2026-01-26 17:53:18.510727289 +0000 UTC m=+0.038641866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:53:18 np0005596060 systemd[1]: Started libpod-conmon-e37e431d484990b10ed1db199e7045b1ff451170cf29964416170c506b3fc414.scope.
Jan 26 12:53:18 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:53:18 np0005596060 podman[160448]: 2026-01-26 17:53:18.881866727 +0000 UTC m=+0.409781304 container init e37e431d484990b10ed1db199e7045b1ff451170cf29964416170c506b3fc414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_heisenberg, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 12:53:18 np0005596060 podman[160448]: 2026-01-26 17:53:18.889214286 +0000 UTC m=+0.417128853 container start e37e431d484990b10ed1db199e7045b1ff451170cf29964416170c506b3fc414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_heisenberg, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 12:53:18 np0005596060 amazing_heisenberg[160464]: 167 167
Jan 26 12:53:18 np0005596060 systemd[1]: libpod-e37e431d484990b10ed1db199e7045b1ff451170cf29964416170c506b3fc414.scope: Deactivated successfully.
Jan 26 12:53:19 np0005596060 podman[160448]: 2026-01-26 17:53:19.199996468 +0000 UTC m=+0.727911025 container attach e37e431d484990b10ed1db199e7045b1ff451170cf29964416170c506b3fc414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 12:53:19 np0005596060 podman[160448]: 2026-01-26 17:53:19.203003772 +0000 UTC m=+0.730918369 container died e37e431d484990b10ed1db199e7045b1ff451170cf29964416170c506b3fc414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 12:53:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 12:53:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:53:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:19.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:53:20 np0005596060 systemd[1]: var-lib-containers-storage-overlay-71bea9e3dd32828c8482f95dd18241f3cd523523cb268c75e69c443a51889da1-merged.mount: Deactivated successfully.
Jan 26 12:53:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:20.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:20 np0005596060 podman[160448]: 2026-01-26 17:53:20.719550614 +0000 UTC m=+2.247465191 container remove e37e431d484990b10ed1db199e7045b1ff451170cf29964416170c506b3fc414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_heisenberg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 12:53:20 np0005596060 systemd[1]: libpod-conmon-e37e431d484990b10ed1db199e7045b1ff451170cf29964416170c506b3fc414.scope: Deactivated successfully.
Jan 26 12:53:20 np0005596060 podman[160489]: 2026-01-26 17:53:20.947718555 +0000 UTC m=+0.088291761 container create 921e7ec4dba7f52dc284a29ef9c0cb924d7f8d4057fda8cbbf0ae36fd507fc13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 26 12:53:20 np0005596060 podman[160489]: 2026-01-26 17:53:20.883694339 +0000 UTC m=+0.024267565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:53:20 np0005596060 systemd[1]: Started libpod-conmon-921e7ec4dba7f52dc284a29ef9c0cb924d7f8d4057fda8cbbf0ae36fd507fc13.scope.
Jan 26 12:53:21 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:53:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37054ed1aa26c3df285cacdb09d879e600ec884cb3049695c108b00267f581f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37054ed1aa26c3df285cacdb09d879e600ec884cb3049695c108b00267f581f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37054ed1aa26c3df285cacdb09d879e600ec884cb3049695c108b00267f581f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37054ed1aa26c3df285cacdb09d879e600ec884cb3049695c108b00267f581f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:21 np0005596060 podman[160489]: 2026-01-26 17:53:21.09999623 +0000 UTC m=+0.240569456 container init 921e7ec4dba7f52dc284a29ef9c0cb924d7f8d4057fda8cbbf0ae36fd507fc13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_yalow, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 12:53:21 np0005596060 podman[160489]: 2026-01-26 17:53:21.108350034 +0000 UTC m=+0.248923230 container start 921e7ec4dba7f52dc284a29ef9c0cb924d7f8d4057fda8cbbf0ae36fd507fc13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:53:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:53:21 np0005596060 podman[160489]: 2026-01-26 17:53:21.248889881 +0000 UTC m=+0.389463077 container attach 921e7ec4dba7f52dc284a29ef9c0cb924d7f8d4057fda8cbbf0ae36fd507fc13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 12:53:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 12:53:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:21.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:21 np0005596060 systemd-logind[786]: New session 49 of user zuul.
Jan 26 12:53:21 np0005596060 systemd[1]: Started Session 49 of User zuul.
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]: {
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:    "1": [
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:        {
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            "devices": [
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "/dev/loop3"
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            ],
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            "lv_name": "ceph_lv0",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            "lv_size": "7511998464",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            "name": "ceph_lv0",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            "tags": {
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "ceph.cluster_name": "ceph",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "ceph.crush_device_class": "",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "ceph.encrypted": "0",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "ceph.osd_id": "1",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "ceph.type": "block",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:                "ceph.vdo": "0"
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            },
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            "type": "block",
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:            "vg_name": "ceph_vg0"
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:        }
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]:    ]
Jan 26 12:53:21 np0005596060 flamboyant_yalow[160506]: }
Jan 26 12:53:21 np0005596060 systemd[1]: libpod-921e7ec4dba7f52dc284a29ef9c0cb924d7f8d4057fda8cbbf0ae36fd507fc13.scope: Deactivated successfully.
Jan 26 12:53:21 np0005596060 podman[160489]: 2026-01-26 17:53:21.939756369 +0000 UTC m=+1.080329565 container died 921e7ec4dba7f52dc284a29ef9c0cb924d7f8d4057fda8cbbf0ae36fd507fc13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_yalow, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 12:53:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f37054ed1aa26c3df285cacdb09d879e600ec884cb3049695c108b00267f581f-merged.mount: Deactivated successfully.
Jan 26 12:53:22 np0005596060 podman[160489]: 2026-01-26 17:53:22.104309773 +0000 UTC m=+1.244882969 container remove 921e7ec4dba7f52dc284a29ef9c0cb924d7f8d4057fda8cbbf0ae36fd507fc13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:53:22 np0005596060 systemd[1]: libpod-conmon-921e7ec4dba7f52dc284a29ef9c0cb924d7f8d4057fda8cbbf0ae36fd507fc13.scope: Deactivated successfully.
Jan 26 12:53:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:22.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:22 np0005596060 podman[160824]: 2026-01-26 17:53:22.716765252 +0000 UTC m=+0.019521742 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:53:22 np0005596060 python3.9[160796]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:53:22 np0005596060 podman[160824]: 2026-01-26 17:53:22.912756536 +0000 UTC m=+0.215513046 container create 209a4f5421bb099102bca4e214a0bfddc3a5026be15d9bb9b402db35ca7f4686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:53:22 np0005596060 systemd[1]: Started libpod-conmon-209a4f5421bb099102bca4e214a0bfddc3a5026be15d9bb9b402db35ca7f4686.scope.
Jan 26 12:53:22 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:53:23 np0005596060 podman[160824]: 2026-01-26 17:53:23.099103828 +0000 UTC m=+0.401860328 container init 209a4f5421bb099102bca4e214a0bfddc3a5026be15d9bb9b402db35ca7f4686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 12:53:23 np0005596060 podman[160824]: 2026-01-26 17:53:23.107685804 +0000 UTC m=+0.410442274 container start 209a4f5421bb099102bca4e214a0bfddc3a5026be15d9bb9b402db35ca7f4686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclaren, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 12:53:23 np0005596060 silly_mclaren[160845]: 167 167
Jan 26 12:53:23 np0005596060 systemd[1]: libpod-209a4f5421bb099102bca4e214a0bfddc3a5026be15d9bb9b402db35ca7f4686.scope: Deactivated successfully.
Jan 26 12:53:23 np0005596060 podman[160824]: 2026-01-26 17:53:23.131651726 +0000 UTC m=+0.434408216 container attach 209a4f5421bb099102bca4e214a0bfddc3a5026be15d9bb9b402db35ca7f4686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:53:23 np0005596060 podman[160824]: 2026-01-26 17:53:23.132325193 +0000 UTC m=+0.435081683 container died 209a4f5421bb099102bca4e214a0bfddc3a5026be15d9bb9b402db35ca7f4686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:53:23 np0005596060 systemd[1]: var-lib-containers-storage-overlay-474e0e2e0de1475f30d5cfe53950311b39484b052840db4ed41c0d3930437a2b-merged.mount: Deactivated successfully.
Jan 26 12:53:23 np0005596060 podman[160824]: 2026-01-26 17:53:23.266763521 +0000 UTC m=+0.569520021 container remove 209a4f5421bb099102bca4e214a0bfddc3a5026be15d9bb9b402db35ca7f4686 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_mclaren, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 12:53:23 np0005596060 systemd[1]: libpod-conmon-209a4f5421bb099102bca4e214a0bfddc3a5026be15d9bb9b402db35ca7f4686.scope: Deactivated successfully.
Jan 26 12:53:23 np0005596060 podman[160896]: 2026-01-26 17:53:23.453039301 +0000 UTC m=+0.042692984 container create e0a93414b48cfb10ce521cf43c65565245edde520911a480766794b0d3c8599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:53:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 12:53:23 np0005596060 systemd[1]: Started libpod-conmon-e0a93414b48cfb10ce521cf43c65565245edde520911a480766794b0d3c8599f.scope.
Jan 26 12:53:23 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:53:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9822bad83938d7892a887081e89206b58212a3a011b3775d784c09e45666f786/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9822bad83938d7892a887081e89206b58212a3a011b3775d784c09e45666f786/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9822bad83938d7892a887081e89206b58212a3a011b3775d784c09e45666f786/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9822bad83938d7892a887081e89206b58212a3a011b3775d784c09e45666f786/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:53:23 np0005596060 podman[160896]: 2026-01-26 17:53:23.435038869 +0000 UTC m=+0.024692572 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:53:23 np0005596060 podman[160896]: 2026-01-26 17:53:23.537032402 +0000 UTC m=+0.126686105 container init e0a93414b48cfb10ce521cf43c65565245edde520911a480766794b0d3c8599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Jan 26 12:53:23 np0005596060 podman[160896]: 2026-01-26 17:53:23.544843178 +0000 UTC m=+0.134496861 container start e0a93414b48cfb10ce521cf43c65565245edde520911a480766794b0d3c8599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 12:53:23 np0005596060 podman[160896]: 2026-01-26 17:53:23.549232868 +0000 UTC m=+0.138886581 container attach e0a93414b48cfb10ce521cf43c65565245edde520911a480766794b0d3c8599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meitner, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:53:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:23.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:24 np0005596060 upbeat_meitner[160912]: {
Jan 26 12:53:24 np0005596060 upbeat_meitner[160912]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:53:24 np0005596060 upbeat_meitner[160912]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:53:24 np0005596060 upbeat_meitner[160912]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:53:24 np0005596060 upbeat_meitner[160912]:        "osd_id": 1,
Jan 26 12:53:24 np0005596060 upbeat_meitner[160912]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:53:24 np0005596060 upbeat_meitner[160912]:        "type": "bluestore"
Jan 26 12:53:24 np0005596060 upbeat_meitner[160912]:    }
Jan 26 12:53:24 np0005596060 upbeat_meitner[160912]: }
Jan 26 12:53:24 np0005596060 systemd[1]: libpod-e0a93414b48cfb10ce521cf43c65565245edde520911a480766794b0d3c8599f.scope: Deactivated successfully.
Jan 26 12:53:24 np0005596060 podman[160896]: 2026-01-26 17:53:24.424614713 +0000 UTC m=+1.014268396 container died e0a93414b48cfb10ce521cf43c65565245edde520911a480766794b0d3c8599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meitner, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:53:24 np0005596060 python3.9[161048]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:53:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:24.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:25 np0005596060 systemd[1]: var-lib-containers-storage-overlay-9822bad83938d7892a887081e89206b58212a3a011b3775d784c09e45666f786-merged.mount: Deactivated successfully.
Jan 26 12:53:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 26 12:53:25 np0005596060 podman[160896]: 2026-01-26 17:53:25.518549938 +0000 UTC m=+2.108203621 container remove e0a93414b48cfb10ce521cf43c65565245edde520911a480766794b0d3c8599f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_meitner, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 12:53:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:53:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:25.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:25 np0005596060 systemd[1]: libpod-conmon-e0a93414b48cfb10ce521cf43c65565245edde520911a480766794b0d3c8599f.scope: Deactivated successfully.
Jan 26 12:53:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:53:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:53:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:53:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:53:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:26.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:53:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:53:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:53:26 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 85ec5acf-b9bd-43f0-baee-87a9026c89fd does not exist
Jan 26 12:53:26 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 532bd25a-af59-4ebe-82b1-d4d0493f4446 does not exist
Jan 26 12:53:26 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 79d89568-78ca-4de6-a855-3ea4f38daacb does not exist
Jan 26 12:53:26 np0005596060 python3.9[161239]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 12:53:26 np0005596060 systemd[1]: Reloading.
Jan 26 12:53:26 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:53:26 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:53:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 26 12:53:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:27.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:53:28 np0005596060 python3.9[161474]: ansible-ansible.builtin.service_facts Invoked
Jan 26 12:53:28 np0005596060 network[161491]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 12:53:28 np0005596060 network[161492]: 'network-scripts' will be removed from distribution in near future.
Jan 26 12:53:28 np0005596060 network[161493]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 12:53:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:53:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:28.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:53:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:29.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:53:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:30.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:53:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:53:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:53:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:31.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:53:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:32.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:53:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:33.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:53:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:34.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:35 np0005596060 python3.9[161758]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:53:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:35.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:35 np0005596060 python3.9[161912]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:53:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:53:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:53:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:36.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:53:36 np0005596060 python3.9[162065]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:53:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:53:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:37.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:53:37 np0005596060 python3.9[162219]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:53:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:38.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:38 np0005596060 python3.9[162372]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:53:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:39 np0005596060 python3.9[162575]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:53:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:39.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:40 np0005596060 python3.9[162729]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:53:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:40.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:53:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:53:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:41.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:53:42 np0005596060 python3.9[162883]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:53:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:42.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:53:42 np0005596060 podman[162960]: 2026-01-26 17:53:42.857282976 +0000 UTC m=+0.105022159 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 12:53:43 np0005596060 python3.9[163054]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:53:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:43.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:53:43 np0005596060 python3.9[163207]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:53:44
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.meta', 'volumes', 'vms', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log']
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:53:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:53:44 np0005596060 python3.9[163359]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:44.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:45 np0005596060 podman[163511]: 2026-01-26 17:53:44.99946082 +0000 UTC m=+0.096006024 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 12:53:45 np0005596060 python3.9[163512]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:53:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:45.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:53:45 np0005596060 python3.9[163689]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:46 np0005596060 python3.9[163841]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:53:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:46.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:47 np0005596060 python3.9[163993]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:53:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:47.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:53:47 np0005596060 python3.9[164146]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:48.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:48 np0005596060 python3.9[164298]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:49 np0005596060 python3.9[164450]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:49.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:50 np0005596060 python3.9[164603]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:50.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:50 np0005596060 python3.9[164755]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:51 np0005596060 python3.9[164907]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:53:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:53:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:51.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:52 np0005596060 python3.9[165060]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:53:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:52.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:53 np0005596060 python3.9[165212]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 12:53:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:53.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:53 np0005596060 python3.9[165365]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 12:53:53 np0005596060 systemd[1]: Reloading.
Jan 26 12:53:54 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:53:54 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:53:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:54.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:55 np0005596060 python3.9[165551]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:53:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:55.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:56 np0005596060 python3.9[165705]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:53:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:53:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:56.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:56 np0005596060 python3.9[165858]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:53:57 np0005596060 python3.9[166011]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:53:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:57.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:58 np0005596060 python3.9[166165]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:53:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:53:58.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:59 np0005596060 python3.9[166368]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:53:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:53:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:53:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:53:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:53:59.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:53:59 np0005596060 python3.9[166522]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:54:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:54:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:00.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:54:01 np0005596060 python3.9[166675]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 26 12:54:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:54:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:01.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:02 np0005596060 python3.9[166829]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 12:54:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:02.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:03 np0005596060 python3.9[166987]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:54:03 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:54:03 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:54:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:03.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:04.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:05.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:05 np0005596060 python3.9[167150]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:54:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:54:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:06.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:06 np0005596060 python3.9[167234]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:54:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:07.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:08.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:09.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:10.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:54:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:11.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:12.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:13.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:13 np0005596060 podman[167257]: 2026-01-26 17:54:13.826220429 +0000 UTC m=+0.075449937 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 26 12:54:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:54:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:54:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:54:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:54:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:54:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:54:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:14.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:54:14.722 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 12:54:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:54:14.724 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 12:54:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:54:14.724 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 12:54:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:15.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:15 np0005596060 podman[167335]: 2026-01-26 17:54:15.834407055 +0000 UTC m=+0.088871984 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 12:54:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:54:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:16.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:17.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:18.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:19.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:20.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:54:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:21.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:22.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:23.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:24.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:25.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:54:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:26.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:27.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:54:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:54:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:54:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:54:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:54:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:28.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:54:28 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev f1f4b3dd-d72f-4926-90d6-e308c5ee6519 does not exist
Jan 26 12:54:28 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 325766d4-5ed6-4857-b511-b4d81dfbb374 does not exist
Jan 26 12:54:28 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev eaada39b-9b32-453b-b0df-74ac7b1d9179 does not exist
Jan 26 12:54:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:54:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:54:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:54:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:54:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:54:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:54:29 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:54:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:29 np0005596060 podman[167799]: 2026-01-26 17:54:29.536739783 +0000 UTC m=+0.053248266 container create 3d08f56154c6959a8f8b3242212ab27333342150845db0e1ae7c2e65aa9a1319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brown, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:54:29 np0005596060 systemd[1]: Started libpod-conmon-3d08f56154c6959a8f8b3242212ab27333342150845db0e1ae7c2e65aa9a1319.scope.
Jan 26 12:54:29 np0005596060 podman[167799]: 2026-01-26 17:54:29.512139006 +0000 UTC m=+0.028647509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:54:29 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:54:29 np0005596060 podman[167799]: 2026-01-26 17:54:29.662746741 +0000 UTC m=+0.179255274 container init 3d08f56154c6959a8f8b3242212ab27333342150845db0e1ae7c2e65aa9a1319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brown, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:54:29 np0005596060 podman[167799]: 2026-01-26 17:54:29.671933581 +0000 UTC m=+0.188442064 container start 3d08f56154c6959a8f8b3242212ab27333342150845db0e1ae7c2e65aa9a1319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 12:54:29 np0005596060 podman[167799]: 2026-01-26 17:54:29.676150137 +0000 UTC m=+0.192658670 container attach 3d08f56154c6959a8f8b3242212ab27333342150845db0e1ae7c2e65aa9a1319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:54:29 np0005596060 vigilant_brown[167816]: 167 167
Jan 26 12:54:29 np0005596060 systemd[1]: libpod-3d08f56154c6959a8f8b3242212ab27333342150845db0e1ae7c2e65aa9a1319.scope: Deactivated successfully.
Jan 26 12:54:29 np0005596060 podman[167799]: 2026-01-26 17:54:29.679265175 +0000 UTC m=+0.195773658 container died 3d08f56154c6959a8f8b3242212ab27333342150845db0e1ae7c2e65aa9a1319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brown, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 12:54:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:29.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:29 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ca36505fb9679bd53e90707815dd2a7323605900671061b2455169d544431407-merged.mount: Deactivated successfully.
Jan 26 12:54:29 np0005596060 podman[167799]: 2026-01-26 17:54:29.730628552 +0000 UTC m=+0.247137035 container remove 3d08f56154c6959a8f8b3242212ab27333342150845db0e1ae7c2e65aa9a1319 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_brown, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:54:29 np0005596060 systemd[1]: libpod-conmon-3d08f56154c6959a8f8b3242212ab27333342150845db0e1ae7c2e65aa9a1319.scope: Deactivated successfully.
Jan 26 12:54:29 np0005596060 podman[167840]: 2026-01-26 17:54:29.933707382 +0000 UTC m=+0.070195120 container create 6b3654e2320307c522b95383ef2be04fb46b3e7d7e3d34e1ae2298d290aa74a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:54:29 np0005596060 systemd[1]: Started libpod-conmon-6b3654e2320307c522b95383ef2be04fb46b3e7d7e3d34e1ae2298d290aa74a8.scope.
Jan 26 12:54:30 np0005596060 podman[167840]: 2026-01-26 17:54:29.907795343 +0000 UTC m=+0.044283071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:54:30 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:54:30 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c365e6492246052f44741d9e68171bfd612b85c2b5c19cfdd46e5aaf1c1ed3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:30 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c365e6492246052f44741d9e68171bfd612b85c2b5c19cfdd46e5aaf1c1ed3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:30 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c365e6492246052f44741d9e68171bfd612b85c2b5c19cfdd46e5aaf1c1ed3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:30 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c365e6492246052f44741d9e68171bfd612b85c2b5c19cfdd46e5aaf1c1ed3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:30 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c365e6492246052f44741d9e68171bfd612b85c2b5c19cfdd46e5aaf1c1ed3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:30 np0005596060 podman[167840]: 2026-01-26 17:54:30.047832173 +0000 UTC m=+0.184319891 container init 6b3654e2320307c522b95383ef2be04fb46b3e7d7e3d34e1ae2298d290aa74a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:54:30 np0005596060 podman[167840]: 2026-01-26 17:54:30.056349826 +0000 UTC m=+0.192837524 container start 6b3654e2320307c522b95383ef2be04fb46b3e7d7e3d34e1ae2298d290aa74a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:54:30 np0005596060 podman[167840]: 2026-01-26 17:54:30.060591963 +0000 UTC m=+0.197079671 container attach 6b3654e2320307c522b95383ef2be04fb46b3e7d7e3d34e1ae2298d290aa74a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:54:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:54:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:54:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:30.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:30 np0005596060 stupefied_lehmann[167857]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:54:30 np0005596060 stupefied_lehmann[167857]: --> relative data size: 1.0
Jan 26 12:54:30 np0005596060 stupefied_lehmann[167857]: --> All data devices are unavailable
Jan 26 12:54:30 np0005596060 systemd[1]: libpod-6b3654e2320307c522b95383ef2be04fb46b3e7d7e3d34e1ae2298d290aa74a8.scope: Deactivated successfully.
Jan 26 12:54:30 np0005596060 podman[167840]: 2026-01-26 17:54:30.986018458 +0000 UTC m=+1.122506156 container died 6b3654e2320307c522b95383ef2be04fb46b3e7d7e3d34e1ae2298d290aa74a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 12:54:31 np0005596060 systemd[1]: var-lib-containers-storage-overlay-09c365e6492246052f44741d9e68171bfd612b85c2b5c19cfdd46e5aaf1c1ed3-merged.mount: Deactivated successfully.
Jan 26 12:54:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:54:31 np0005596060 podman[167840]: 2026-01-26 17:54:31.645110658 +0000 UTC m=+1.781598386 container remove 6b3654e2320307c522b95383ef2be04fb46b3e7d7e3d34e1ae2298d290aa74a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 12:54:31 np0005596060 systemd[1]: libpod-conmon-6b3654e2320307c522b95383ef2be04fb46b3e7d7e3d34e1ae2298d290aa74a8.scope: Deactivated successfully.
Jan 26 12:54:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:31.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:32 np0005596060 podman[168028]: 2026-01-26 17:54:32.401404575 +0000 UTC m=+0.037495591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:54:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:32.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:33.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:33 np0005596060 podman[168028]: 2026-01-26 17:54:33.739424081 +0000 UTC m=+1.375515007 container create 707a6fec15d5f9be425870e18e89964222fe8b65f260e49dd69e561376d9061d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shockley, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:54:33 np0005596060 systemd[1]: Started libpod-conmon-707a6fec15d5f9be425870e18e89964222fe8b65f260e49dd69e561376d9061d.scope.
Jan 26 12:54:34 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:54:34 np0005596060 podman[168028]: 2026-01-26 17:54:34.116742008 +0000 UTC m=+1.752833014 container init 707a6fec15d5f9be425870e18e89964222fe8b65f260e49dd69e561376d9061d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shockley, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 26 12:54:34 np0005596060 podman[168028]: 2026-01-26 17:54:34.131081998 +0000 UTC m=+1.767172964 container start 707a6fec15d5f9be425870e18e89964222fe8b65f260e49dd69e561376d9061d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 26 12:54:34 np0005596060 cranky_shockley[168046]: 167 167
Jan 26 12:54:34 np0005596060 systemd[1]: libpod-707a6fec15d5f9be425870e18e89964222fe8b65f260e49dd69e561376d9061d.scope: Deactivated successfully.
Jan 26 12:54:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:34.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:34 np0005596060 podman[168028]: 2026-01-26 17:54:34.707485995 +0000 UTC m=+2.343577001 container attach 707a6fec15d5f9be425870e18e89964222fe8b65f260e49dd69e561376d9061d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:54:34 np0005596060 podman[168028]: 2026-01-26 17:54:34.708167742 +0000 UTC m=+2.344258688 container died 707a6fec15d5f9be425870e18e89964222fe8b65f260e49dd69e561376d9061d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:54:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:35.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:35 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ae3f15f54494573f8396c68cf9948cd1e8b51016b936d5962c142827d93668b6-merged.mount: Deactivated successfully.
Jan 26 12:54:35 np0005596060 podman[168028]: 2026-01-26 17:54:35.828989805 +0000 UTC m=+3.465080761 container remove 707a6fec15d5f9be425870e18e89964222fe8b65f260e49dd69e561376d9061d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_shockley, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:54:35 np0005596060 systemd[1]: libpod-conmon-707a6fec15d5f9be425870e18e89964222fe8b65f260e49dd69e561376d9061d.scope: Deactivated successfully.
Jan 26 12:54:36 np0005596060 podman[168071]: 2026-01-26 17:54:36.058273062 +0000 UTC m=+0.049030760 container create e94d9ec6a7c0be94a9ca98fd2a260c07337084821b11b9a0cd54d4c950530e16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:54:36 np0005596060 systemd[1]: Started libpod-conmon-e94d9ec6a7c0be94a9ca98fd2a260c07337084821b11b9a0cd54d4c950530e16.scope.
Jan 26 12:54:36 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:54:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/047232748e47f15f7f395725c6a18fa1d9f819757d663185543ce44bd2a782dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:36 np0005596060 podman[168071]: 2026-01-26 17:54:36.036722192 +0000 UTC m=+0.027479870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:54:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/047232748e47f15f7f395725c6a18fa1d9f819757d663185543ce44bd2a782dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/047232748e47f15f7f395725c6a18fa1d9f819757d663185543ce44bd2a782dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/047232748e47f15f7f395725c6a18fa1d9f819757d663185543ce44bd2a782dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:36 np0005596060 podman[168071]: 2026-01-26 17:54:36.146310119 +0000 UTC m=+0.137067817 container init e94d9ec6a7c0be94a9ca98fd2a260c07337084821b11b9a0cd54d4c950530e16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 12:54:36 np0005596060 podman[168071]: 2026-01-26 17:54:36.16032538 +0000 UTC m=+0.151083048 container start e94d9ec6a7c0be94a9ca98fd2a260c07337084821b11b9a0cd54d4c950530e16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:54:36 np0005596060 podman[168071]: 2026-01-26 17:54:36.166587747 +0000 UTC m=+0.157345435 container attach e94d9ec6a7c0be94a9ca98fd2a260c07337084821b11b9a0cd54d4c950530e16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:54:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:54:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:36.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]: {
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:    "1": [
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:        {
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            "devices": [
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "/dev/loop3"
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            ],
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            "lv_name": "ceph_lv0",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            "lv_size": "7511998464",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            "name": "ceph_lv0",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            "tags": {
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "ceph.cluster_name": "ceph",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "ceph.crush_device_class": "",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "ceph.encrypted": "0",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "ceph.osd_id": "1",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "ceph.type": "block",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:                "ceph.vdo": "0"
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            },
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            "type": "block",
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:            "vg_name": "ceph_vg0"
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:        }
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]:    ]
Jan 26 12:54:36 np0005596060 gallant_joliot[168088]: }
Jan 26 12:54:36 np0005596060 podman[168071]: 2026-01-26 17:54:36.946348481 +0000 UTC m=+0.937106139 container died e94d9ec6a7c0be94a9ca98fd2a260c07337084821b11b9a0cd54d4c950530e16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:54:36 np0005596060 systemd[1]: libpod-e94d9ec6a7c0be94a9ca98fd2a260c07337084821b11b9a0cd54d4c950530e16.scope: Deactivated successfully.
Jan 26 12:54:36 np0005596060 systemd[1]: var-lib-containers-storage-overlay-047232748e47f15f7f395725c6a18fa1d9f819757d663185543ce44bd2a782dc-merged.mount: Deactivated successfully.
Jan 26 12:54:37 np0005596060 podman[168071]: 2026-01-26 17:54:37.020758096 +0000 UTC m=+1.011515744 container remove e94d9ec6a7c0be94a9ca98fd2a260c07337084821b11b9a0cd54d4c950530e16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_joliot, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 26 12:54:37 np0005596060 systemd[1]: libpod-conmon-e94d9ec6a7c0be94a9ca98fd2a260c07337084821b11b9a0cd54d4c950530e16.scope: Deactivated successfully.
Jan 26 12:54:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:37.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:37 np0005596060 podman[168254]: 2026-01-26 17:54:37.678161953 +0000 UTC m=+0.025911830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:54:37 np0005596060 podman[168254]: 2026-01-26 17:54:37.955096455 +0000 UTC m=+0.302846312 container create ace87ebed174b1e2344b1019e0bf3fae0f5866778a75be8559dd27bfe48d940b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 12:54:38 np0005596060 systemd[1]: Started libpod-conmon-ace87ebed174b1e2344b1019e0bf3fae0f5866778a75be8559dd27bfe48d940b.scope.
Jan 26 12:54:38 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:54:38 np0005596060 podman[168254]: 2026-01-26 17:54:38.072161539 +0000 UTC m=+0.419911466 container init ace87ebed174b1e2344b1019e0bf3fae0f5866778a75be8559dd27bfe48d940b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:54:38 np0005596060 podman[168254]: 2026-01-26 17:54:38.079568835 +0000 UTC m=+0.427318672 container start ace87ebed174b1e2344b1019e0bf3fae0f5866778a75be8559dd27bfe48d940b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_allen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 12:54:38 np0005596060 podman[168254]: 2026-01-26 17:54:38.083649527 +0000 UTC m=+0.431399414 container attach ace87ebed174b1e2344b1019e0bf3fae0f5866778a75be8559dd27bfe48d940b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 26 12:54:38 np0005596060 admiring_allen[168270]: 167 167
Jan 26 12:54:38 np0005596060 systemd[1]: libpod-ace87ebed174b1e2344b1019e0bf3fae0f5866778a75be8559dd27bfe48d940b.scope: Deactivated successfully.
Jan 26 12:54:38 np0005596060 podman[168254]: 2026-01-26 17:54:38.088056787 +0000 UTC m=+0.435806664 container died ace87ebed174b1e2344b1019e0bf3fae0f5866778a75be8559dd27bfe48d940b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_allen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 12:54:38 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7c3d285978bc8267b7e5d7982cd26eb5b14d8f09d0d0b5529eea6e7de2099761-merged.mount: Deactivated successfully.
Jan 26 12:54:38 np0005596060 podman[168254]: 2026-01-26 17:54:38.145116538 +0000 UTC m=+0.492866385 container remove ace87ebed174b1e2344b1019e0bf3fae0f5866778a75be8559dd27bfe48d940b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 12:54:38 np0005596060 systemd[1]: libpod-conmon-ace87ebed174b1e2344b1019e0bf3fae0f5866778a75be8559dd27bfe48d940b.scope: Deactivated successfully.
Jan 26 12:54:38 np0005596060 podman[168294]: 2026-01-26 17:54:38.340469834 +0000 UTC m=+0.051887261 container create 4a12b20fc07a7c9bc62ce7abaccf31f359431494fdf236a3dbc9c893f2dccf06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:54:38 np0005596060 systemd[1]: Started libpod-conmon-4a12b20fc07a7c9bc62ce7abaccf31f359431494fdf236a3dbc9c893f2dccf06.scope.
Jan 26 12:54:38 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:54:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b60360a493856d4aafd26c07f33182045e58dc9a8a8f518702fc3c28e57d88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b60360a493856d4aafd26c07f33182045e58dc9a8a8f518702fc3c28e57d88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b60360a493856d4aafd26c07f33182045e58dc9a8a8f518702fc3c28e57d88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41b60360a493856d4aafd26c07f33182045e58dc9a8a8f518702fc3c28e57d88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:54:38 np0005596060 podman[168294]: 2026-01-26 17:54:38.321118309 +0000 UTC m=+0.032535756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:54:38 np0005596060 podman[168294]: 2026-01-26 17:54:38.431862745 +0000 UTC m=+0.143280192 container init 4a12b20fc07a7c9bc62ce7abaccf31f359431494fdf236a3dbc9c893f2dccf06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ganguly, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:54:38 np0005596060 podman[168294]: 2026-01-26 17:54:38.439477336 +0000 UTC m=+0.150894763 container start 4a12b20fc07a7c9bc62ce7abaccf31f359431494fdf236a3dbc9c893f2dccf06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:54:38 np0005596060 podman[168294]: 2026-01-26 17:54:38.443591659 +0000 UTC m=+0.155009116 container attach 4a12b20fc07a7c9bc62ce7abaccf31f359431494fdf236a3dbc9c893f2dccf06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ganguly, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 26 12:54:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:38.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:39 np0005596060 kernel: SELinux:  Converting 2777 SID table entries...
Jan 26 12:54:39 np0005596060 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 12:54:39 np0005596060 kernel: SELinux:  policy capability open_perms=1
Jan 26 12:54:39 np0005596060 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 12:54:39 np0005596060 kernel: SELinux:  policy capability always_check_network=0
Jan 26 12:54:39 np0005596060 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 12:54:39 np0005596060 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 12:54:39 np0005596060 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 12:54:39 np0005596060 adoring_ganguly[168310]: {
Jan 26 12:54:39 np0005596060 adoring_ganguly[168310]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:54:39 np0005596060 adoring_ganguly[168310]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:54:39 np0005596060 adoring_ganguly[168310]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:54:39 np0005596060 adoring_ganguly[168310]:        "osd_id": 1,
Jan 26 12:54:39 np0005596060 adoring_ganguly[168310]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:54:39 np0005596060 adoring_ganguly[168310]:        "type": "bluestore"
Jan 26 12:54:39 np0005596060 adoring_ganguly[168310]:    }
Jan 26 12:54:39 np0005596060 adoring_ganguly[168310]: }
Jan 26 12:54:39 np0005596060 systemd[1]: libpod-4a12b20fc07a7c9bc62ce7abaccf31f359431494fdf236a3dbc9c893f2dccf06.scope: Deactivated successfully.
Jan 26 12:54:39 np0005596060 podman[168294]: 2026-01-26 17:54:39.390147414 +0000 UTC m=+1.101564831 container died 4a12b20fc07a7c9bc62ce7abaccf31f359431494fdf236a3dbc9c893f2dccf06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ganguly, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:54:39 np0005596060 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 26 12:54:39 np0005596060 systemd[1]: var-lib-containers-storage-overlay-41b60360a493856d4aafd26c07f33182045e58dc9a8a8f518702fc3c28e57d88-merged.mount: Deactivated successfully.
Jan 26 12:54:39 np0005596060 podman[168294]: 2026-01-26 17:54:39.478052487 +0000 UTC m=+1.189469944 container remove 4a12b20fc07a7c9bc62ce7abaccf31f359431494fdf236a3dbc9c893f2dccf06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ganguly, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 12:54:39 np0005596060 systemd[1]: libpod-conmon-4a12b20fc07a7c9bc62ce7abaccf31f359431494fdf236a3dbc9c893f2dccf06.scope: Deactivated successfully.
Jan 26 12:54:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:54:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:54:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:54:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:54:39 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 139c4f09-4b61-4df3-aa4c-76ae901e8470 does not exist
Jan 26 12:54:39 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 3b19b82f-2ddc-428b-9a9c-3d271fb49611 does not exist
Jan 26 12:54:39 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ac7a6c1b-1b51-4e26-885c-4263a1612907 does not exist
Jan 26 12:54:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:39.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:40.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:54:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:54:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:54:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:41.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:42.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:43.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:54:44
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', 'volumes', '.rgw.root', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms']
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:54:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:54:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:44.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:44 np0005596060 podman[168452]: 2026-01-26 17:54:44.843478719 +0000 UTC m=+0.083631388 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 12:54:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:54:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:45.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:54:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:54:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:46.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:46 np0005596060 podman[168474]: 2026-01-26 17:54:46.876119406 +0000 UTC m=+0.139239501 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 12:54:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:47.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:48.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:49.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:50.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:54:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:51 np0005596060 kernel: SELinux:  Converting 2777 SID table entries...
Jan 26 12:54:51 np0005596060 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 12:54:51 np0005596060 kernel: SELinux:  policy capability open_perms=1
Jan 26 12:54:51 np0005596060 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 12:54:51 np0005596060 kernel: SELinux:  policy capability always_check_network=0
Jan 26 12:54:51 np0005596060 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 12:54:51 np0005596060 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 12:54:51 np0005596060 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 12:54:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:52.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:53.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:54.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:54:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:55.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:54:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:54:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:56.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:57.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:54:58.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:54:59 np0005596060 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 26 12:54:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:54:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:54:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:54:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:54:59.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:00.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:55:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:01.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:02.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:55:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:03.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:04.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:05.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:55:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:06.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 26 12:55:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:07.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 26 12:55:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:08.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:09.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:10.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:55:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:11.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:12.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:13.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:55:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:55:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:55:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:55:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:55:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:55:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:14.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:55:14.723 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 12:55:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:55:14.725 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 12:55:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:55:14.725 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 12:55:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:15.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:15 np0005596060 podman[173739]: 2026-01-26 17:55:15.835105469 +0000 UTC m=+0.082766785 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 12:55:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:55:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:16.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:17.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:17 np0005596060 podman[174793]: 2026-01-26 17:55:17.861405857 +0000 UTC m=+0.107936876 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 26 12:55:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:18.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:19.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 26 12:55:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:20.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 26 12:55:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:55:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:21.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:22.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:23.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:24.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:25.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:55:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:26.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:27.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:28.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:29.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:30.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:55:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:31.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:32.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:55:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:33.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:55:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:34.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:35.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:55:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:36.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:37.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:38.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:55:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:39.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:55:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:40.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:55:41 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 96673b03-9f18-48a0-9f01-6c5fe0ee6bbd does not exist
Jan 26 12:55:41 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d6b5d11f-706b-473a-9a7f-a135996e1e6f does not exist
Jan 26 12:55:41 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev a88c2ee0-49fe-45b6-b4ff-e03d3f97b941 does not exist
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:55:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:55:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:55:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:42 np0005596060 podman[185875]: 2026-01-26 17:55:42.334489043 +0000 UTC m=+0.064696410 container create 9535550902b1964658f9c3c81be49541ebe65dcc8934c6dce34f7694c87c2b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elion, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:55:42 np0005596060 systemd[1]: Started libpod-conmon-9535550902b1964658f9c3c81be49541ebe65dcc8934c6dce34f7694c87c2b69.scope.
Jan 26 12:55:42 np0005596060 podman[185875]: 2026-01-26 17:55:42.301645566 +0000 UTC m=+0.031852983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:55:42 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:55:42 np0005596060 podman[185875]: 2026-01-26 17:55:42.437605846 +0000 UTC m=+0.167813263 container init 9535550902b1964658f9c3c81be49541ebe65dcc8934c6dce34f7694c87c2b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elion, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 26 12:55:42 np0005596060 podman[185875]: 2026-01-26 17:55:42.449901052 +0000 UTC m=+0.180108379 container start 9535550902b1964658f9c3c81be49541ebe65dcc8934c6dce34f7694c87c2b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 12:55:42 np0005596060 podman[185875]: 2026-01-26 17:55:42.454260761 +0000 UTC m=+0.184468168 container attach 9535550902b1964658f9c3c81be49541ebe65dcc8934c6dce34f7694c87c2b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 12:55:42 np0005596060 mystifying_elion[185892]: 167 167
Jan 26 12:55:42 np0005596060 systemd[1]: libpod-9535550902b1964658f9c3c81be49541ebe65dcc8934c6dce34f7694c87c2b69.scope: Deactivated successfully.
Jan 26 12:55:42 np0005596060 podman[185875]: 2026-01-26 17:55:42.464425323 +0000 UTC m=+0.194632700 container died 9535550902b1964658f9c3c81be49541ebe65dcc8934c6dce34f7694c87c2b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elion, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:55:42 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8ef728684fb8fd53b3e4eef452bacad56133816561296800c84184ac7e4c18c7-merged.mount: Deactivated successfully.
Jan 26 12:55:42 np0005596060 podman[185875]: 2026-01-26 17:55:42.526450465 +0000 UTC m=+0.256657832 container remove 9535550902b1964658f9c3c81be49541ebe65dcc8934c6dce34f7694c87c2b69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:55:42 np0005596060 systemd[1]: libpod-conmon-9535550902b1964658f9c3c81be49541ebe65dcc8934c6dce34f7694c87c2b69.scope: Deactivated successfully.
Jan 26 12:55:42 np0005596060 podman[185917]: 2026-01-26 17:55:42.731002591 +0000 UTC m=+0.051741808 container create ec3f1d87161b0987909a930b4cd801fbbb42224ae79037959ff891d6edc7faea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pare, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 12:55:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:42.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:42 np0005596060 systemd[1]: Started libpod-conmon-ec3f1d87161b0987909a930b4cd801fbbb42224ae79037959ff891d6edc7faea.scope.
Jan 26 12:55:42 np0005596060 podman[185917]: 2026-01-26 17:55:42.707576258 +0000 UTC m=+0.028315555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:55:42 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:55:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844d5c842c89f24fd09ad352fa4c0db4f103f601ca373a8d2ae868be0ea0970b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844d5c842c89f24fd09ad352fa4c0db4f103f601ca373a8d2ae868be0ea0970b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844d5c842c89f24fd09ad352fa4c0db4f103f601ca373a8d2ae868be0ea0970b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844d5c842c89f24fd09ad352fa4c0db4f103f601ca373a8d2ae868be0ea0970b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/844d5c842c89f24fd09ad352fa4c0db4f103f601ca373a8d2ae868be0ea0970b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:42 np0005596060 podman[185917]: 2026-01-26 17:55:42.83552454 +0000 UTC m=+0.156263777 container init ec3f1d87161b0987909a930b4cd801fbbb42224ae79037959ff891d6edc7faea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:55:42 np0005596060 podman[185917]: 2026-01-26 17:55:42.848892442 +0000 UTC m=+0.169631659 container start ec3f1d87161b0987909a930b4cd801fbbb42224ae79037959ff891d6edc7faea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pare, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 12:55:42 np0005596060 podman[185917]: 2026-01-26 17:55:42.855089056 +0000 UTC m=+0.175828303 container attach ec3f1d87161b0987909a930b4cd801fbbb42224ae79037959ff891d6edc7faea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pare, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 12:55:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:43 np0005596060 jovial_pare[185934]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:55:43 np0005596060 jovial_pare[185934]: --> relative data size: 1.0
Jan 26 12:55:43 np0005596060 jovial_pare[185934]: --> All data devices are unavailable
Jan 26 12:55:43 np0005596060 systemd[1]: libpod-ec3f1d87161b0987909a930b4cd801fbbb42224ae79037959ff891d6edc7faea.scope: Deactivated successfully.
Jan 26 12:55:43 np0005596060 podman[185917]: 2026-01-26 17:55:43.750749214 +0000 UTC m=+1.071488431 container died ec3f1d87161b0987909a930b4cd801fbbb42224ae79037959ff891d6edc7faea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:55:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:55:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:43.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:55:44
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.control', 'vms', 'images', 'backups']
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:55:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:55:44 np0005596060 systemd[1]: var-lib-containers-storage-overlay-844d5c842c89f24fd09ad352fa4c0db4f103f601ca373a8d2ae868be0ea0970b-merged.mount: Deactivated successfully.
Jan 26 12:55:44 np0005596060 podman[185917]: 2026-01-26 17:55:44.777608393 +0000 UTC m=+2.098347640 container remove ec3f1d87161b0987909a930b4cd801fbbb42224ae79037959ff891d6edc7faea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_pare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 12:55:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:44.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:44 np0005596060 systemd[1]: libpod-conmon-ec3f1d87161b0987909a930b4cd801fbbb42224ae79037959ff891d6edc7faea.scope: Deactivated successfully.
Jan 26 12:55:45 np0005596060 podman[186105]: 2026-01-26 17:55:45.555920703 +0000 UTC m=+0.039982605 container create eca7439a193d4f26dcbbbe4234cadb20658c9d2216401bd9d3c5e24ed6db45b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:55:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:45 np0005596060 systemd[1]: Started libpod-conmon-eca7439a193d4f26dcbbbe4234cadb20658c9d2216401bd9d3c5e24ed6db45b5.scope.
Jan 26 12:55:45 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:55:45 np0005596060 podman[186105]: 2026-01-26 17:55:45.540476669 +0000 UTC m=+0.024538601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:55:45 np0005596060 podman[186105]: 2026-01-26 17:55:45.636422925 +0000 UTC m=+0.120484847 container init eca7439a193d4f26dcbbbe4234cadb20658c9d2216401bd9d3c5e24ed6db45b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 26 12:55:45 np0005596060 podman[186105]: 2026-01-26 17:55:45.642921696 +0000 UTC m=+0.126983598 container start eca7439a193d4f26dcbbbe4234cadb20658c9d2216401bd9d3c5e24ed6db45b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:55:45 np0005596060 magical_yalow[186122]: 167 167
Jan 26 12:55:45 np0005596060 systemd[1]: libpod-eca7439a193d4f26dcbbbe4234cadb20658c9d2216401bd9d3c5e24ed6db45b5.scope: Deactivated successfully.
Jan 26 12:55:45 np0005596060 conmon[186122]: conmon eca7439a193d4f26dcbb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eca7439a193d4f26dcbbbe4234cadb20658c9d2216401bd9d3c5e24ed6db45b5.scope/container/memory.events
Jan 26 12:55:45 np0005596060 podman[186105]: 2026-01-26 17:55:45.648963577 +0000 UTC m=+0.133025489 container attach eca7439a193d4f26dcbbbe4234cadb20658c9d2216401bd9d3c5e24ed6db45b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 12:55:45 np0005596060 podman[186105]: 2026-01-26 17:55:45.650682899 +0000 UTC m=+0.134744831 container died eca7439a193d4f26dcbbbe4234cadb20658c9d2216401bd9d3c5e24ed6db45b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Jan 26 12:55:45 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ce9e70178ece9effce658131f278513bbea67303ada6aa08436537d4e0fd3938-merged.mount: Deactivated successfully.
Jan 26 12:55:45 np0005596060 podman[186105]: 2026-01-26 17:55:45.702382855 +0000 UTC m=+0.186444757 container remove eca7439a193d4f26dcbbbe4234cadb20658c9d2216401bd9d3c5e24ed6db45b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:55:45 np0005596060 systemd[1]: libpod-conmon-eca7439a193d4f26dcbbbe4234cadb20658c9d2216401bd9d3c5e24ed6db45b5.scope: Deactivated successfully.
Jan 26 12:55:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:55:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:45.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:55:45 np0005596060 podman[186145]: 2026-01-26 17:55:45.887744233 +0000 UTC m=+0.067338075 container create 74c3402c0cce6afbd4dd9dfe0e8e64a80ac5f30b16a57072fcbe6d94c83fa8a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:55:45 np0005596060 systemd[1]: Started libpod-conmon-74c3402c0cce6afbd4dd9dfe0e8e64a80ac5f30b16a57072fcbe6d94c83fa8a7.scope.
Jan 26 12:55:45 np0005596060 podman[186145]: 2026-01-26 17:55:45.853542773 +0000 UTC m=+0.033136705 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:55:45 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:55:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a45d576da76cdaef4f45884bf0979b396f2bb633128d506ba8d1a3563edacd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a45d576da76cdaef4f45884bf0979b396f2bb633128d506ba8d1a3563edacd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a45d576da76cdaef4f45884bf0979b396f2bb633128d506ba8d1a3563edacd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a45d576da76cdaef4f45884bf0979b396f2bb633128d506ba8d1a3563edacd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:46 np0005596060 podman[186145]: 2026-01-26 17:55:46.019474458 +0000 UTC m=+0.199068300 container init 74c3402c0cce6afbd4dd9dfe0e8e64a80ac5f30b16a57072fcbe6d94c83fa8a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:55:46 np0005596060 podman[186145]: 2026-01-26 17:55:46.030371889 +0000 UTC m=+0.209965741 container start 74c3402c0cce6afbd4dd9dfe0e8e64a80ac5f30b16a57072fcbe6d94c83fa8a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euclid, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 12:55:46 np0005596060 podman[186145]: 2026-01-26 17:55:46.034508472 +0000 UTC m=+0.214102324 container attach 74c3402c0cce6afbd4dd9dfe0e8e64a80ac5f30b16a57072fcbe6d94c83fa8a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euclid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:55:46 np0005596060 podman[186161]: 2026-01-26 17:55:46.053353741 +0000 UTC m=+0.106539820 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 12:55:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]: {
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:    "1": [
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:        {
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            "devices": [
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "/dev/loop3"
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            ],
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            "lv_name": "ceph_lv0",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            "lv_size": "7511998464",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            "name": "ceph_lv0",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            "tags": {
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "ceph.cluster_name": "ceph",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "ceph.crush_device_class": "",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "ceph.encrypted": "0",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "ceph.osd_id": "1",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "ceph.type": "block",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:                "ceph.vdo": "0"
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            },
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            "type": "block",
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:            "vg_name": "ceph_vg0"
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:        }
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]:    ]
Jan 26 12:55:46 np0005596060 compassionate_euclid[186162]: }
Jan 26 12:55:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:46.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:46 np0005596060 systemd[1]: libpod-74c3402c0cce6afbd4dd9dfe0e8e64a80ac5f30b16a57072fcbe6d94c83fa8a7.scope: Deactivated successfully.
Jan 26 12:55:46 np0005596060 podman[186145]: 2026-01-26 17:55:46.794658961 +0000 UTC m=+0.974252803 container died 74c3402c0cce6afbd4dd9dfe0e8e64a80ac5f30b16a57072fcbe6d94c83fa8a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:55:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8a45d576da76cdaef4f45884bf0979b396f2bb633128d506ba8d1a3563edacd4-merged.mount: Deactivated successfully.
Jan 26 12:55:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:47.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:48.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:49 np0005596060 podman[186145]: 2026-01-26 17:55:49.2248699 +0000 UTC m=+3.404463742 container remove 74c3402c0cce6afbd4dd9dfe0e8e64a80ac5f30b16a57072fcbe6d94c83fa8a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:55:49 np0005596060 systemd[1]: libpod-conmon-74c3402c0cce6afbd4dd9dfe0e8e64a80ac5f30b16a57072fcbe6d94c83fa8a7.scope: Deactivated successfully.
Jan 26 12:55:49 np0005596060 auditd[700]: Audit daemon rotating log files
Jan 26 12:55:49 np0005596060 podman[186202]: 2026-01-26 17:55:49.354424861 +0000 UTC m=+0.619962184 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 12:55:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:49 np0005596060 podman[186377]: 2026-01-26 17:55:49.903593975 +0000 UTC m=+0.025784972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:55:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:49.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:50 np0005596060 podman[186377]: 2026-01-26 17:55:50.598299787 +0000 UTC m=+0.720490774 container create 2534547bc5116d65841dffc110ed9de9183c61e61c6423d1e919d90412200fc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 12:55:50 np0005596060 systemd[1]: Started libpod-conmon-2534547bc5116d65841dffc110ed9de9183c61e61c6423d1e919d90412200fc4.scope.
Jan 26 12:55:50 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:55:50 np0005596060 podman[186377]: 2026-01-26 17:55:50.741727653 +0000 UTC m=+0.863918600 container init 2534547bc5116d65841dffc110ed9de9183c61e61c6423d1e919d90412200fc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shirley, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 12:55:50 np0005596060 podman[186377]: 2026-01-26 17:55:50.752511411 +0000 UTC m=+0.874702358 container start 2534547bc5116d65841dffc110ed9de9183c61e61c6423d1e919d90412200fc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shirley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:55:50 np0005596060 podman[186377]: 2026-01-26 17:55:50.759760011 +0000 UTC m=+0.881951028 container attach 2534547bc5116d65841dffc110ed9de9183c61e61c6423d1e919d90412200fc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shirley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 12:55:50 np0005596060 systemd[1]: libpod-2534547bc5116d65841dffc110ed9de9183c61e61c6423d1e919d90412200fc4.scope: Deactivated successfully.
Jan 26 12:55:50 np0005596060 frosty_shirley[186393]: 167 167
Jan 26 12:55:50 np0005596060 conmon[186393]: conmon 2534547bc5116d65841d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2534547bc5116d65841dffc110ed9de9183c61e61c6423d1e919d90412200fc4.scope/container/memory.events
Jan 26 12:55:50 np0005596060 podman[186377]: 2026-01-26 17:55:50.761887104 +0000 UTC m=+0.884078051 container died 2534547bc5116d65841dffc110ed9de9183c61e61c6423d1e919d90412200fc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shirley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 12:55:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:50.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:50 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4c1cd17a85baaef8ef935258f2419c14a2089f164f2800187bc4250bd697750f-merged.mount: Deactivated successfully.
Jan 26 12:55:50 np0005596060 kernel: SELinux:  Converting 2778 SID table entries...
Jan 26 12:55:50 np0005596060 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 12:55:50 np0005596060 kernel: SELinux:  policy capability open_perms=1
Jan 26 12:55:50 np0005596060 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 12:55:50 np0005596060 kernel: SELinux:  policy capability always_check_network=0
Jan 26 12:55:50 np0005596060 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 12:55:50 np0005596060 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 12:55:50 np0005596060 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 12:55:51 np0005596060 podman[186377]: 2026-01-26 17:55:51.534495433 +0000 UTC m=+1.656686380 container remove 2534547bc5116d65841dffc110ed9de9183c61e61c6423d1e919d90412200fc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shirley, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 12:55:51 np0005596060 systemd[1]: libpod-conmon-2534547bc5116d65841dffc110ed9de9183c61e61c6423d1e919d90412200fc4.scope: Deactivated successfully.
Jan 26 12:55:51 np0005596060 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 26 12:55:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:55:51 np0005596060 podman[186421]: 2026-01-26 17:55:51.691147957 +0000 UTC m=+0.029202777 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:55:51 np0005596060 podman[186421]: 2026-01-26 17:55:51.822402309 +0000 UTC m=+0.160457099 container create 5c42f600476e64aad5a5a5568adb1b5832ef0b6a89e80dc2562982bcf653057b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 12:55:51 np0005596060 systemd[1]: Started libpod-conmon-5c42f600476e64aad5a5a5568adb1b5832ef0b6a89e80dc2562982bcf653057b.scope.
Jan 26 12:55:52 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:55:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:52.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681b192e1a9c6ce83a83403f2ac996b4973a58c334ffd2495152a58a218f44cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681b192e1a9c6ce83a83403f2ac996b4973a58c334ffd2495152a58a218f44cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681b192e1a9c6ce83a83403f2ac996b4973a58c334ffd2495152a58a218f44cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681b192e1a9c6ce83a83403f2ac996b4973a58c334ffd2495152a58a218f44cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:55:52 np0005596060 podman[186421]: 2026-01-26 17:55:52.068842437 +0000 UTC m=+0.406897307 container init 5c42f600476e64aad5a5a5568adb1b5832ef0b6a89e80dc2562982bcf653057b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 12:55:52 np0005596060 podman[186421]: 2026-01-26 17:55:52.081402119 +0000 UTC m=+0.419456939 container start 5c42f600476e64aad5a5a5568adb1b5832ef0b6a89e80dc2562982bcf653057b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:55:52 np0005596060 podman[186421]: 2026-01-26 17:55:52.299584543 +0000 UTC m=+0.637639343 container attach 5c42f600476e64aad5a5a5568adb1b5832ef0b6a89e80dc2562982bcf653057b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kare, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:55:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:52.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:53 np0005596060 festive_kare[186437]: {
Jan 26 12:55:53 np0005596060 festive_kare[186437]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:55:53 np0005596060 festive_kare[186437]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:55:53 np0005596060 festive_kare[186437]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:55:53 np0005596060 festive_kare[186437]:        "osd_id": 1,
Jan 26 12:55:53 np0005596060 festive_kare[186437]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:55:53 np0005596060 festive_kare[186437]:        "type": "bluestore"
Jan 26 12:55:53 np0005596060 festive_kare[186437]:    }
Jan 26 12:55:53 np0005596060 festive_kare[186437]: }
Jan 26 12:55:53 np0005596060 systemd[1]: libpod-5c42f600476e64aad5a5a5568adb1b5832ef0b6a89e80dc2562982bcf653057b.scope: Deactivated successfully.
Jan 26 12:55:53 np0005596060 podman[186421]: 2026-01-26 17:55:53.047758254 +0000 UTC m=+1.385813084 container died 5c42f600476e64aad5a5a5568adb1b5832ef0b6a89e80dc2562982bcf653057b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 12:55:53 np0005596060 systemd[1]: var-lib-containers-storage-overlay-681b192e1a9c6ce83a83403f2ac996b4973a58c334ffd2495152a58a218f44cc-merged.mount: Deactivated successfully.
Jan 26 12:55:53 np0005596060 podman[186421]: 2026-01-26 17:55:53.127427095 +0000 UTC m=+1.465481885 container remove 5c42f600476e64aad5a5a5568adb1b5832ef0b6a89e80dc2562982bcf653057b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:55:53 np0005596060 systemd[1]: libpod-conmon-5c42f600476e64aad5a5a5568adb1b5832ef0b6a89e80dc2562982bcf653057b.scope: Deactivated successfully.
Jan 26 12:55:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:55:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:55:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:55:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:55:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:54.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:55:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:55:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 4d38b84c-2aa2-4579-b1d0-80505cac45b1 does not exist
Jan 26 12:55:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9a744203-43c6-4a54-89e2-f77b32e12e7d does not exist
Jan 26 12:55:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 583f00d9-1d98-4950-83f3-4a8348070035 does not exist
Jan 26 12:55:54 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:55:54 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:55:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:54.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:55 np0005596060 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 26 12:55:55 np0005596060 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Jan 26 12:55:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:56.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:55:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:55:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:56.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:55:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:55:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:55:58.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:55:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:55:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:55:58.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:55:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:00.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:00.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:56:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:02.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:02.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:56:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:04.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:04.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:05 np0005596060 systemd[1]: Stopping OpenSSH server daemon...
Jan 26 12:56:05 np0005596060 systemd[1]: sshd.service: Deactivated successfully.
Jan 26 12:56:05 np0005596060 systemd[1]: Stopped OpenSSH server daemon.
Jan 26 12:56:05 np0005596060 systemd[1]: sshd.service: Consumed 2.510s CPU time, read 32.0K from disk, written 0B to disk.
Jan 26 12:56:05 np0005596060 systemd[1]: Stopped target sshd-keygen.target.
Jan 26 12:56:05 np0005596060 systemd[1]: Stopping sshd-keygen.target...
Jan 26 12:56:05 np0005596060 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 12:56:05 np0005596060 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 12:56:05 np0005596060 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 12:56:05 np0005596060 systemd[1]: Reached target sshd-keygen.target.
Jan 26 12:56:05 np0005596060 systemd[1]: Starting OpenSSH server daemon...
Jan 26 12:56:05 np0005596060 systemd[1]: Started OpenSSH server daemon.
Jan 26 12:56:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:06.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:56:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:06.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:07 np0005596060 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 12:56:07 np0005596060 systemd[1]: Starting man-db-cache-update.service...
Jan 26 12:56:07 np0005596060 systemd[1]: Reloading.
Jan 26 12:56:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:08.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:08 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:56:08 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:56:08 np0005596060 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 12:56:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:08.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:10.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:10.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:56:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:12.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:12.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:13 np0005596060 python3.9[192290]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 12:56:13 np0005596060 systemd[1]: Reloading.
Jan 26 12:56:13 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:56:13 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:56:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:14.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:56:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:56:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:56:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:56:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:56:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:56:14 np0005596060 python3.9[193603]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 12:56:14 np0005596060 systemd[1]: Reloading.
Jan 26 12:56:14 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:56:14 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:56:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:56:14.725 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 12:56:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:56:14.727 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 12:56:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:56:14.727 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 12:56:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:14.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:15 np0005596060 python3.9[194945]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 12:56:15 np0005596060 systemd[1]: Reloading.
Jan 26 12:56:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:15 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:56:15 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:56:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:16.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:16 np0005596060 podman[196170]: 2026-01-26 17:56:16.303213487 +0000 UTC m=+0.067270003 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 26 12:56:16 np0005596060 python3.9[196291]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 12:56:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:56:16 np0005596060 systemd[1]: Reloading.
Jan 26 12:56:16 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:56:16 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:56:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:16.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:17 np0005596060 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 12:56:17 np0005596060 systemd[1]: Finished man-db-cache-update.service.
Jan 26 12:56:17 np0005596060 systemd[1]: man-db-cache-update.service: Consumed 11.551s CPU time.
Jan 26 12:56:17 np0005596060 systemd[1]: run-ra1072c0e778644f08f8a51f19aec683d.service: Deactivated successfully.
Jan 26 12:56:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:18.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:18.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:19 np0005596060 python3.9[197014]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:19 np0005596060 systemd[1]: Reloading.
Jan 26 12:56:19 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:56:19 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:56:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:19 np0005596060 podman[197053]: 2026-01-26 17:56:19.787080673 +0000 UTC m=+0.171881124 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 12:56:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:20.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:20 np0005596060 python3.9[197281]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:20 np0005596060 systemd[1]: Reloading.
Jan 26 12:56:20 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:56:20 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:56:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:20.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:56:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:21 np0005596060 python3.9[197471]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:21 np0005596060 systemd[1]: Reloading.
Jan 26 12:56:21 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:56:21 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:56:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:22.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:56:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:22.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:56:22 np0005596060 python3.9[197663]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:23 np0005596060 python3.9[197819]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:23 np0005596060 systemd[1]: Reloading.
Jan 26 12:56:23 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:56:23 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:56:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:24.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:56:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:24.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:56:25 np0005596060 python3.9[198009]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 12:56:25 np0005596060 systemd[1]: Reloading.
Jan 26 12:56:25 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:56:25 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:56:25 np0005596060 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 26 12:56:25 np0005596060 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 26 12:56:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:26.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:26 np0005596060 python3.9[198203]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:56:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:26.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:27 np0005596060 python3.9[198358]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:28.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:28 np0005596060 python3.9[198514]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:28.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:29 np0005596060 python3.9[198669]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:30.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:30 np0005596060 python3.9[198825]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:30.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:31 np0005596060 python3.9[198980]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:56:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:31 np0005596060 python3.9[199136]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:32.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:32.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:32 np0005596060 python3.9[199291]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:33 np0005596060 python3.9[199447]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:34.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:34 np0005596060 python3.9[199602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:34.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:35 np0005596060 python3.9[199757]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:56:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:36.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:56:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:56:36 np0005596060 python3.9[199913]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:36.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:37 np0005596060 python3.9[200068]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:38 np0005596060 python3.9[200224]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 12:56:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:38.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:40.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:40.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:56:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:41 np0005596060 python3.9[200431]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:56:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:42.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:42 np0005596060 python3.9[200583]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:56:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:42.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:42 np0005596060 python3.9[200735]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:56:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:56:44
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'backups', 'volumes', '.mgr']
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:56:44 np0005596060 python3.9[200888]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:56:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:44.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:56:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:56:44 np0005596060 python3.9[201040]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:56:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:44.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:46 np0005596060 python3.9[201193]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 12:56:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:46.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:56:46 np0005596060 podman[201317]: 2026-01-26 17:56:46.663284314 +0000 UTC m=+0.079926948 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 12:56:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:46.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:46 np0005596060 python3.9[201350]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:56:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:48.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:48 np0005596060 python3.9[201515]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:56:48 np0005596060 python3.9[201640]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769450207.1735752-1646-111608231749635/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:56:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:48.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:49 np0005596060 python3.9[201792]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:56:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:50 np0005596060 python3.9[201919]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769450208.930754-1646-34019963046754/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:56:50 np0005596060 podman[201918]: 2026-01-26 17:56:50.057773348 +0000 UTC m=+0.166551762 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 12:56:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:56:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:50.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:56:50 np0005596060 python3.9[202096]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:56:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:50.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:51 np0005596060 python3.9[202221]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769450210.219443-1646-182645125427935/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:56:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:56:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:52.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:52 np0005596060 python3.9[202374]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:56:52 np0005596060 python3.9[202499]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769450211.7206647-1646-131258239434680/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:56:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:52.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:53 np0005596060 python3.9[202651]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:56:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:53 np0005596060 python3.9[202777]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769450212.912738-1646-143658605434729/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:56:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:54.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:54 np0005596060 python3.9[202929]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:56:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:54.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:55 np0005596060 python3.9[203129]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769450214.1808507-1646-26257851202473/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:56:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:55 np0005596060 podman[203311]: 2026-01-26 17:56:55.790957496 +0000 UTC m=+0.243546186 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:56:55 np0005596060 podman[203311]: 2026-01-26 17:56:55.891848094 +0000 UTC m=+0.344436764 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 12:56:55 np0005596060 python3.9[203394]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:56:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:56.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 12:56:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:56:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 12:56:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:56:56 np0005596060 python3.9[203576]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769450215.419903-1646-42723132180957/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:56:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:56:56 np0005596060 podman[203719]: 2026-01-26 17:56:56.790613719 +0000 UTC m=+0.062697890 container exec e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:56:56 np0005596060 podman[203719]: 2026-01-26 17:56:56.797132771 +0000 UTC m=+0.069216922 container exec_died e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 12:56:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:56:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:56.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:56:57 np0005596060 python3.9[203872]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:56:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:56:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:56:57 np0005596060 podman[203869]: 2026-01-26 17:56:57.395893407 +0000 UTC m=+0.438879532 container exec 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.28.2, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, com.redhat.component=keepalived-container, distribution-scope=public, architecture=x86_64, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 26 12:56:57 np0005596060 podman[203869]: 2026-01-26 17:56:57.596644578 +0000 UTC m=+0.639630603 container exec_died 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, architecture=x86_64, version=2.2.4, description=keepalived for Ceph, io.buildah.version=1.28.2, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=Ceph keepalived)
Jan 26 12:56:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:56:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:56:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:56:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:56:57 np0005596060 python3.9[204029]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769450216.6698012-1646-223368469085427/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:56:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:56:58.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:56:58 np0005596060 python3.9[204299]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:56:58 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ea61bb0e-8190-4c8f-a7e9-d4511ddb1e06 does not exist
Jan 26 12:56:58 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 1b90110c-2eb5-455a-aca0-ee040257892b does not exist
Jan 26 12:56:58 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 79d0b9a0-3ec9-458f-a2bf-2d64c60062a7 does not exist
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:56:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:56:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:56:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:56:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:56:58.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:56:59 np0005596060 podman[204543]: 2026-01-26 17:56:59.268520795 +0000 UTC m=+0.050981339 container create 305825cef4ca42f5a7c5b86dd4f536e02dd43916efca47e95066f754fbc7b671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ishizaka, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 12:56:59 np0005596060 systemd[1]: Started libpod-conmon-305825cef4ca42f5a7c5b86dd4f536e02dd43916efca47e95066f754fbc7b671.scope.
Jan 26 12:56:59 np0005596060 podman[204543]: 2026-01-26 17:56:59.244252321 +0000 UTC m=+0.026712925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:56:59 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:56:59 np0005596060 podman[204543]: 2026-01-26 17:56:59.466049806 +0000 UTC m=+0.248510380 container init 305825cef4ca42f5a7c5b86dd4f536e02dd43916efca47e95066f754fbc7b671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ishizaka, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 12:56:59 np0005596060 podman[204543]: 2026-01-26 17:56:59.476126256 +0000 UTC m=+0.258586810 container start 305825cef4ca42f5a7c5b86dd4f536e02dd43916efca47e95066f754fbc7b671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:56:59 np0005596060 podman[204543]: 2026-01-26 17:56:59.479870219 +0000 UTC m=+0.262330773 container attach 305825cef4ca42f5a7c5b86dd4f536e02dd43916efca47e95066f754fbc7b671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ishizaka, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:56:59 np0005596060 cool_ishizaka[204594]: 167 167
Jan 26 12:56:59 np0005596060 systemd[1]: libpod-305825cef4ca42f5a7c5b86dd4f536e02dd43916efca47e95066f754fbc7b671.scope: Deactivated successfully.
Jan 26 12:56:59 np0005596060 podman[204543]: 2026-01-26 17:56:59.485460328 +0000 UTC m=+0.267920882 container died 305825cef4ca42f5a7c5b86dd4f536e02dd43916efca47e95066f754fbc7b671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ishizaka, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 12:56:59 np0005596060 systemd[1]: var-lib-containers-storage-overlay-11d29c0c023ed3980cf923f90a8fb65123e6e54732ac0e0889e313b13368c6b1-merged.mount: Deactivated successfully.
Jan 26 12:56:59 np0005596060 python3.9[204626]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:56:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:56:59 np0005596060 podman[204543]: 2026-01-26 17:56:59.708569775 +0000 UTC m=+0.491030319 container remove 305825cef4ca42f5a7c5b86dd4f536e02dd43916efca47e95066f754fbc7b671 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_ishizaka, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:56:59 np0005596060 systemd[1]: libpod-conmon-305825cef4ca42f5a7c5b86dd4f536e02dd43916efca47e95066f754fbc7b671.scope: Deactivated successfully.
Jan 26 12:56:59 np0005596060 podman[204719]: 2026-01-26 17:56:59.963225686 +0000 UTC m=+0.107284359 container create 6258ea2afba5dbfa0b5a1d354f40f6958de38172de5ce69d972ed3a4f4e0abc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ganguly, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:56:59 np0005596060 podman[204719]: 2026-01-26 17:56:59.882839137 +0000 UTC m=+0.026897820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:57:00 np0005596060 systemd[1]: Started libpod-conmon-6258ea2afba5dbfa0b5a1d354f40f6958de38172de5ce69d972ed3a4f4e0abc0.scope.
Jan 26 12:57:00 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:57:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec1a540394b17d02d5b1fc652ecfcddc3d489e3aefd6579d418e06d2912cb0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec1a540394b17d02d5b1fc652ecfcddc3d489e3aefd6579d418e06d2912cb0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec1a540394b17d02d5b1fc652ecfcddc3d489e3aefd6579d418e06d2912cb0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec1a540394b17d02d5b1fc652ecfcddc3d489e3aefd6579d418e06d2912cb0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec1a540394b17d02d5b1fc652ecfcddc3d489e3aefd6579d418e06d2912cb0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:00.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:00 np0005596060 python3.9[204817]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:00 np0005596060 podman[204719]: 2026-01-26 17:57:00.307965687 +0000 UTC m=+0.452024340 container init 6258ea2afba5dbfa0b5a1d354f40f6958de38172de5ce69d972ed3a4f4e0abc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ganguly, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 12:57:00 np0005596060 podman[204719]: 2026-01-26 17:57:00.321968855 +0000 UTC m=+0.466027538 container start 6258ea2afba5dbfa0b5a1d354f40f6958de38172de5ce69d972ed3a4f4e0abc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 12:57:00 np0005596060 podman[204719]: 2026-01-26 17:57:00.32701482 +0000 UTC m=+0.471073583 container attach 6258ea2afba5dbfa0b5a1d354f40f6958de38172de5ce69d972ed3a4f4e0abc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 12:57:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:00.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:01 np0005596060 python3.9[205021]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:01 np0005596060 frosty_ganguly[204813]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:57:01 np0005596060 frosty_ganguly[204813]: --> relative data size: 1.0
Jan 26 12:57:01 np0005596060 frosty_ganguly[204813]: --> All data devices are unavailable
Jan 26 12:57:01 np0005596060 systemd[1]: libpod-6258ea2afba5dbfa0b5a1d354f40f6958de38172de5ce69d972ed3a4f4e0abc0.scope: Deactivated successfully.
Jan 26 12:57:01 np0005596060 podman[204719]: 2026-01-26 17:57:01.1640086 +0000 UTC m=+1.308067263 container died 6258ea2afba5dbfa0b5a1d354f40f6958de38172de5ce69d972ed3a4f4e0abc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:57:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:57:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:01 np0005596060 python3.9[205193]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:57:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:02.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:57:02 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 26 12:57:02 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:02.223505) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 12:57:02 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 26 12:57:02 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450222223675, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2794, "num_deletes": 503, "total_data_size": 5018170, "memory_usage": 5078704, "flush_reason": "Manual Compaction"}
Jan 26 12:57:02 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 26 12:57:02 np0005596060 python3.9[205345]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:02 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450222753642, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 4918671, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12349, "largest_seqno": 15142, "table_properties": {"data_size": 4906643, "index_size": 7562, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 25257, "raw_average_key_size": 18, "raw_value_size": 4880985, "raw_average_value_size": 3623, "num_data_blocks": 337, "num_entries": 1347, "num_filter_entries": 1347, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449922, "oldest_key_time": 1769449922, "file_creation_time": 1769450222, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:57:02 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 530325 microseconds, and 20542 cpu microseconds.
Jan 26 12:57:02 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 12:57:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:02.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:02.753856) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 4918671 bytes OK
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:02.753913) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:03.318870) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:03.318929) EVENT_LOG_v1 {"time_micros": 1769450223318915, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:03.318959) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 5006233, prev total WAL file size 5038006, number of live WAL files 2.
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:03.322361) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(4803KB)], [29(8605KB)]
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450223322519, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 13731184, "oldest_snapshot_seqno": -1}
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:57:03 np0005596060 python3.9[205498]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4334 keys, 11262772 bytes, temperature: kUnknown
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450223927058, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 11262772, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11228022, "index_size": 22800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 106513, "raw_average_key_size": 24, "raw_value_size": 11144023, "raw_average_value_size": 2571, "num_data_blocks": 965, "num_entries": 4334, "num_filter_entries": 4334, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769450223, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:57:03 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 12:57:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:04.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-5ec1a540394b17d02d5b1fc652ecfcddc3d489e3aefd6579d418e06d2912cb0c-merged.mount: Deactivated successfully.
Jan 26 12:57:04 np0005596060 python3.9[205651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:03.927450) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11262772 bytes
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:04.493156) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 22.7 rd, 18.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.7, 8.4 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 5359, records dropped: 1025 output_compression: NoCompression
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:04.493256) EVENT_LOG_v1 {"time_micros": 1769450224493222, "job": 12, "event": "compaction_finished", "compaction_time_micros": 604684, "compaction_time_cpu_micros": 41167, "output_level": 6, "num_output_files": 1, "total_output_size": 11262772, "num_input_records": 5359, "num_output_records": 4334, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450224494855, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450224497515, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:03.322125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:04.497642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:04.497653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:04.497657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:04.497660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:57:04 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:04.497663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:57:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:04.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:04 np0005596060 python3.9[205803]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:05 np0005596060 podman[204719]: 2026-01-26 17:57:05.088418938 +0000 UTC m=+5.232477601 container remove 6258ea2afba5dbfa0b5a1d354f40f6958de38172de5ce69d972ed3a4f4e0abc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 12:57:05 np0005596060 systemd[1]: libpod-conmon-6258ea2afba5dbfa0b5a1d354f40f6958de38172de5ce69d972ed3a4f4e0abc0.scope: Deactivated successfully.
Jan 26 12:57:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:05 np0005596060 python3.9[206055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:05 np0005596060 podman[206124]: 2026-01-26 17:57:05.78835586 +0000 UTC m=+0.027896615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:57:05 np0005596060 podman[206124]: 2026-01-26 17:57:05.92309419 +0000 UTC m=+0.162634925 container create d30a0c6d748d3c38535798cb019fc5c19fbdfaa5ee421bdf0fec5f179ff94f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_vaughan, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:57:05 np0005596060 systemd[1]: Started libpod-conmon-d30a0c6d748d3c38535798cb019fc5c19fbdfaa5ee421bdf0fec5f179ff94f8b.scope.
Jan 26 12:57:06 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:57:06 np0005596060 podman[206124]: 2026-01-26 17:57:06.103240369 +0000 UTC m=+0.342781134 container init d30a0c6d748d3c38535798cb019fc5c19fbdfaa5ee421bdf0fec5f179ff94f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_vaughan, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 12:57:06 np0005596060 podman[206124]: 2026-01-26 17:57:06.115263298 +0000 UTC m=+0.354804033 container start d30a0c6d748d3c38535798cb019fc5c19fbdfaa5ee421bdf0fec5f179ff94f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:57:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:06 np0005596060 admiring_vaughan[206234]: 167 167
Jan 26 12:57:06 np0005596060 systemd[1]: libpod-d30a0c6d748d3c38535798cb019fc5c19fbdfaa5ee421bdf0fec5f179ff94f8b.scope: Deactivated successfully.
Jan 26 12:57:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:06.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:06 np0005596060 podman[206124]: 2026-01-26 17:57:06.223258523 +0000 UTC m=+0.462799288 container attach d30a0c6d748d3c38535798cb019fc5c19fbdfaa5ee421bdf0fec5f179ff94f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 12:57:06 np0005596060 podman[206124]: 2026-01-26 17:57:06.224681838 +0000 UTC m=+0.464222593 container died d30a0c6d748d3c38535798cb019fc5c19fbdfaa5ee421bdf0fec5f179ff94f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:57:06 np0005596060 python3.9[206265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:06 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d0326b14cbe2328e3a4a1359b2c66a9f5f8e10eebbb432717a9bc53cafa39898-merged.mount: Deactivated successfully.
Jan 26 12:57:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:06.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:06 np0005596060 python3.9[206434]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:57:07 np0005596060 podman[206124]: 2026-01-26 17:57:07.239709713 +0000 UTC m=+1.479250458 container remove d30a0c6d748d3c38535798cb019fc5c19fbdfaa5ee421bdf0fec5f179ff94f8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 12:57:07 np0005596060 systemd[1]: libpod-conmon-d30a0c6d748d3c38535798cb019fc5c19fbdfaa5ee421bdf0fec5f179ff94f8b.scope: Deactivated successfully.
Jan 26 12:57:07 np0005596060 podman[206566]: 2026-01-26 17:57:07.437676315 +0000 UTC m=+0.041802041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:57:07 np0005596060 podman[206566]: 2026-01-26 17:57:07.558310224 +0000 UTC m=+0.162435960 container create d9e93c73b7ca047285be82bad2f2c221efe7628b002dd39093ee847ec6def639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:57:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:07 np0005596060 systemd[1]: Started libpod-conmon-d9e93c73b7ca047285be82bad2f2c221efe7628b002dd39093ee847ec6def639.scope.
Jan 26 12:57:07 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:57:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49a67982b4474b81515e543901b2cd0a95a7f3d7b82fc3eadffa1a44da2d268/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49a67982b4474b81515e543901b2cd0a95a7f3d7b82fc3eadffa1a44da2d268/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49a67982b4474b81515e543901b2cd0a95a7f3d7b82fc3eadffa1a44da2d268/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b49a67982b4474b81515e543901b2cd0a95a7f3d7b82fc3eadffa1a44da2d268/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:07 np0005596060 python3.9[206608]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:07 np0005596060 podman[206566]: 2026-01-26 17:57:07.938541327 +0000 UTC m=+0.542667073 container init d9e93c73b7ca047285be82bad2f2c221efe7628b002dd39093ee847ec6def639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_diffie, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 12:57:07 np0005596060 podman[206566]: 2026-01-26 17:57:07.949540931 +0000 UTC m=+0.553666627 container start d9e93c73b7ca047285be82bad2f2c221efe7628b002dd39093ee847ec6def639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_diffie, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:57:08 np0005596060 podman[206566]: 2026-01-26 17:57:08.013358877 +0000 UTC m=+0.617484573 container attach d9e93c73b7ca047285be82bad2f2c221efe7628b002dd39093ee847ec6def639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_diffie, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 12:57:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:08.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:08 np0005596060 python3.9[206767]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]: {
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:    "1": [
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:        {
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            "devices": [
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "/dev/loop3"
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            ],
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            "lv_name": "ceph_lv0",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            "lv_size": "7511998464",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            "name": "ceph_lv0",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            "tags": {
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "ceph.cluster_name": "ceph",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "ceph.crush_device_class": "",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "ceph.encrypted": "0",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "ceph.osd_id": "1",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "ceph.type": "block",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:                "ceph.vdo": "0"
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            },
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            "type": "block",
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:            "vg_name": "ceph_vg0"
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:        }
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]:    ]
Jan 26 12:57:08 np0005596060 friendly_diffie[206611]: }
Jan 26 12:57:08 np0005596060 systemd[1]: libpod-d9e93c73b7ca047285be82bad2f2c221efe7628b002dd39093ee847ec6def639.scope: Deactivated successfully.
Jan 26 12:57:08 np0005596060 podman[206566]: 2026-01-26 17:57:08.733407499 +0000 UTC m=+1.337533195 container died d9e93c73b7ca047285be82bad2f2c221efe7628b002dd39093ee847ec6def639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_diffie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 26 12:57:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:08.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:09 np0005596060 python3.9[206934]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:09 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b49a67982b4474b81515e543901b2cd0a95a7f3d7b82fc3eadffa1a44da2d268-merged.mount: Deactivated successfully.
Jan 26 12:57:10 np0005596060 python3.9[207088]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:10.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:10 np0005596060 podman[206566]: 2026-01-26 17:57:10.53784974 +0000 UTC m=+3.141975436 container remove d9e93c73b7ca047285be82bad2f2c221efe7628b002dd39093ee847ec6def639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_diffie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 12:57:10 np0005596060 python3.9[207211]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450229.436761-2309-244763763374123/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:10 np0005596060 systemd[1]: libpod-conmon-d9e93c73b7ca047285be82bad2f2c221efe7628b002dd39093ee847ec6def639.scope: Deactivated successfully.
Jan 26 12:57:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:57:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:10.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:57:11 np0005596060 python3.9[207487]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:11 np0005596060 podman[207504]: 2026-01-26 17:57:11.297558178 +0000 UTC m=+0.095647429 container create 0e89fcb323034ecfce6091408368acf973f593d4ffe22291ef09a98ee54ce45f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 26 12:57:11 np0005596060 podman[207504]: 2026-01-26 17:57:11.231441475 +0000 UTC m=+0.029530706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:57:11 np0005596060 systemd[1]: Started libpod-conmon-0e89fcb323034ecfce6091408368acf973f593d4ffe22291ef09a98ee54ce45f.scope.
Jan 26 12:57:11 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:57:11 np0005596060 podman[207504]: 2026-01-26 17:57:11.395326759 +0000 UTC m=+0.193416030 container init 0e89fcb323034ecfce6091408368acf973f593d4ffe22291ef09a98ee54ce45f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 12:57:11 np0005596060 podman[207504]: 2026-01-26 17:57:11.406879326 +0000 UTC m=+0.204968567 container start 0e89fcb323034ecfce6091408368acf973f593d4ffe22291ef09a98ee54ce45f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:57:11 np0005596060 podman[207504]: 2026-01-26 17:57:11.411211144 +0000 UTC m=+0.209300355 container attach 0e89fcb323034ecfce6091408368acf973f593d4ffe22291ef09a98ee54ce45f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:57:11 np0005596060 zen_brahmagupta[207527]: 167 167
Jan 26 12:57:11 np0005596060 systemd[1]: libpod-0e89fcb323034ecfce6091408368acf973f593d4ffe22291ef09a98ee54ce45f.scope: Deactivated successfully.
Jan 26 12:57:11 np0005596060 podman[207504]: 2026-01-26 17:57:11.415522481 +0000 UTC m=+0.213611702 container died 0e89fcb323034ecfce6091408368acf973f593d4ffe22291ef09a98ee54ce45f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 26 12:57:11 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ff6058f98a1a8ad7916e23034c3b33ac85787f4608b39bbf55716424ab6c9c1d-merged.mount: Deactivated successfully.
Jan 26 12:57:11 np0005596060 podman[207504]: 2026-01-26 17:57:11.504607706 +0000 UTC m=+0.302696917 container remove 0e89fcb323034ecfce6091408368acf973f593d4ffe22291ef09a98ee54ce45f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_brahmagupta, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 12:57:11 np0005596060 systemd[1]: libpod-conmon-0e89fcb323034ecfce6091408368acf973f593d4ffe22291ef09a98ee54ce45f.scope: Deactivated successfully.
Jan 26 12:57:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:11 np0005596060 podman[207655]: 2026-01-26 17:57:11.685499573 +0000 UTC m=+0.054825094 container create cf9c344038bb962bc5e14bbb5182d7550e9d131ea6e58bc716bce8c952e6644e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_clarke, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:57:11 np0005596060 systemd[1]: Started libpod-conmon-cf9c344038bb962bc5e14bbb5182d7550e9d131ea6e58bc716bce8c952e6644e.scope.
Jan 26 12:57:11 np0005596060 podman[207655]: 2026-01-26 17:57:11.659340043 +0000 UTC m=+0.028665564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:57:11 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:57:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e74d294e752b13e78a837bf6919bbdfa4c84415742ca64530f153d141cf74e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e74d294e752b13e78a837bf6919bbdfa4c84415742ca64530f153d141cf74e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e74d294e752b13e78a837bf6919bbdfa4c84415742ca64530f153d141cf74e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:11 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e74d294e752b13e78a837bf6919bbdfa4c84415742ca64530f153d141cf74e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:57:11 np0005596060 podman[207655]: 2026-01-26 17:57:11.805043666 +0000 UTC m=+0.174369257 container init cf9c344038bb962bc5e14bbb5182d7550e9d131ea6e58bc716bce8c952e6644e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 12:57:11 np0005596060 podman[207655]: 2026-01-26 17:57:11.812099431 +0000 UTC m=+0.181424942 container start cf9c344038bb962bc5e14bbb5182d7550e9d131ea6e58bc716bce8c952e6644e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 26 12:57:11 np0005596060 python3.9[207681]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450230.789383-2309-11721243032818/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:11 np0005596060 podman[207655]: 2026-01-26 17:57:11.937063758 +0000 UTC m=+0.306389309 container attach cf9c344038bb962bc5e14bbb5182d7550e9d131ea6e58bc716bce8c952e6644e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 12:57:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:57:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:12.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:12 np0005596060 python3.9[207840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:12 np0005596060 vigilant_clarke[207684]: {
Jan 26 12:57:12 np0005596060 vigilant_clarke[207684]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:57:12 np0005596060 vigilant_clarke[207684]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:57:12 np0005596060 vigilant_clarke[207684]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:57:12 np0005596060 vigilant_clarke[207684]:        "osd_id": 1,
Jan 26 12:57:12 np0005596060 vigilant_clarke[207684]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:57:12 np0005596060 vigilant_clarke[207684]:        "type": "bluestore"
Jan 26 12:57:12 np0005596060 vigilant_clarke[207684]:    }
Jan 26 12:57:12 np0005596060 vigilant_clarke[207684]: }
Jan 26 12:57:12 np0005596060 systemd[1]: libpod-cf9c344038bb962bc5e14bbb5182d7550e9d131ea6e58bc716bce8c952e6644e.scope: Deactivated successfully.
Jan 26 12:57:12 np0005596060 podman[207655]: 2026-01-26 17:57:12.714762563 +0000 UTC m=+1.084088174 container died cf9c344038bb962bc5e14bbb5182d7550e9d131ea6e58bc716bce8c952e6644e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_clarke, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 12:57:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:12.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:12 np0005596060 systemd[1]: var-lib-containers-storage-overlay-3e74d294e752b13e78a837bf6919bbdfa4c84415742ca64530f153d141cf74e7-merged.mount: Deactivated successfully.
Jan 26 12:57:13 np0005596060 podman[207655]: 2026-01-26 17:57:13.084352172 +0000 UTC m=+1.453677733 container remove cf9c344038bb962bc5e14bbb5182d7550e9d131ea6e58bc716bce8c952e6644e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 12:57:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:57:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:57:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:57:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:57:13 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 6b299126-92d2-483e-9675-9665199dd0af does not exist
Jan 26 12:57:13 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev f6165ec0-b448-45b8-8a8a-af32708dbe45 does not exist
Jan 26 12:57:13 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 69e3c227-5d81-4db0-b6a9-d71a8e3e62de does not exist
Jan 26 12:57:13 np0005596060 systemd[1]: libpod-conmon-cf9c344038bb962bc5e14bbb5182d7550e9d131ea6e58bc716bce8c952e6644e.scope: Deactivated successfully.
Jan 26 12:57:13 np0005596060 python3.9[207992]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450232.0728369-2309-197573840541538/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:13 np0005596060 python3.9[208195]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:57:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:57:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:57:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:57:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:57:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:57:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:14.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:57:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:57:14 np0005596060 python3.9[208318]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450233.4213383-2309-247730247111450/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:57:14.727 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 12:57:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:57:14.728 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 12:57:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:57:14.729 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 12:57:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:14.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:15 np0005596060 python3.9[208470]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:15 np0005596060 python3.9[208594]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450234.654567-2309-106337870754993/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:16.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:16 np0005596060 python3.9[208746]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:16 np0005596060 podman[208793]: 2026-01-26 17:57:16.825138885 +0000 UTC m=+0.073257352 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 26 12:57:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:16.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:57:17 np0005596060 python3.9[208889]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450236.086232-2309-181538801262109/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:17 np0005596060 python3.9[209042]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:18.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:18 np0005596060 python3.9[209165]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450237.3426769-2309-49130919434992/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 12:57:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:18.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 12:57:19 np0005596060 python3.9[209317]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:19 np0005596060 python3.9[209441]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450238.5939848-2309-237093078629210/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:20.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:20 np0005596060 python3.9[209593]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:20 np0005596060 podman[209712]: 2026-01-26 17:57:20.650290435 +0000 UTC m=+0.089955147 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 12:57:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:20.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:20 np0005596060 python3.9[209785]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450239.755299-2309-255148037679115/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:21 np0005596060 python3.9[209943]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:57:22 np0005596060 python3.9[210066]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450241.0851781-2309-66376503289604/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:22.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:22 np0005596060 python3.9[210218]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:22.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:23 np0005596060 python3.9[210341]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450242.2865949-2309-11812339747646/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:23 np0005596060 python3.9[210494]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:24.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:24 np0005596060 python3.9[210617]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450243.5084178-2309-212574724636862/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:24.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:25 np0005596060 python3.9[210769]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:25 np0005596060 python3.9[210893]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450244.742967-2309-102757646808452/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:26.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:26 np0005596060 python3.9[211045]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:26.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:57:27 np0005596060 python3.9[211168]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450245.9914246-2309-219707460933920/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:27 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 26 12:57:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:27.414776) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 12:57:27 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 26 12:57:27 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450247414838, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 458, "num_deletes": 252, "total_data_size": 479163, "memory_usage": 488704, "flush_reason": "Manual Compaction"}
Jan 26 12:57:27 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 26 12:57:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:27 np0005596060 python3.9[211319]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450248030521, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 367672, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15143, "largest_seqno": 15600, "table_properties": {"data_size": 365175, "index_size": 597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6471, "raw_average_key_size": 19, "raw_value_size": 360063, "raw_average_value_size": 1094, "num_data_blocks": 26, "num_entries": 329, "num_filter_entries": 329, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769450222, "oldest_key_time": 1769450222, "file_creation_time": 1769450247, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 615822 microseconds, and 2252 cpu microseconds.
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 12:57:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:28.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.030589) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 367672 bytes OK
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.030620) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.219445) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.219523) EVENT_LOG_v1 {"time_micros": 1769450248219507, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.219560) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 476437, prev total WAL file size 476437, number of live WAL files 2.
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.220466) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(359KB)], [32(10MB)]
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450248220524, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 11630444, "oldest_snapshot_seqno": -1}
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4154 keys, 7872519 bytes, temperature: kUnknown
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450248843791, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7872519, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7843292, "index_size": 17725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 103164, "raw_average_key_size": 24, "raw_value_size": 7766711, "raw_average_value_size": 1869, "num_data_blocks": 744, "num_entries": 4154, "num_filter_entries": 4154, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769450248, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.844300) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7872519 bytes
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.883234) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 18.7 rd, 12.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.7 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(53.0) write-amplify(21.4) OK, records in: 4663, records dropped: 509 output_compression: NoCompression
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.883292) EVENT_LOG_v1 {"time_micros": 1769450248883272, "job": 14, "event": "compaction_finished", "compaction_time_micros": 623402, "compaction_time_cpu_micros": 28248, "output_level": 6, "num_output_files": 1, "total_output_size": 7872519, "num_input_records": 4663, "num_output_records": 4154, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450248883650, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450248886693, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.220285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.886773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.886777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.886779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.886780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:57:28 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-17:57:28.886781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 12:57:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:28.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:29 np0005596060 python3.9[211474]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 26 12:57:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:30.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:30.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:31 np0005596060 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 26 12:57:31 np0005596060 python3.9[211632]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:57:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:32.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:32 np0005596060 python3.9[211784]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:32.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:33 np0005596060 python3.9[211936]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:34.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:34 np0005596060 python3.9[212089]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:34.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:34 np0005596060 python3.9[212241]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:36 np0005596060 python3.9[212394]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:36.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:36 np0005596060 python3.9[212546]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:36.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:57:37 np0005596060 python3.9[212698]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:38 np0005596060 python3.9[212851]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:38.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:38 np0005596060 python3.9[213003]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:38.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:39 np0005596060 python3.9[213156]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:57:39 np0005596060 systemd[1]: Reloading.
Jan 26 12:57:39 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:57:39 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:57:40 np0005596060 systemd[1]: Starting libvirt logging daemon socket...
Jan 26 12:57:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:57:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:40.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:57:40 np0005596060 systemd[1]: Listening on libvirt logging daemon socket.
Jan 26 12:57:40 np0005596060 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 26 12:57:40 np0005596060 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 26 12:57:40 np0005596060 systemd[1]: Starting libvirt logging daemon...
Jan 26 12:57:40 np0005596060 systemd[1]: Started libvirt logging daemon.
Jan 26 12:57:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:40.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:41 np0005596060 python3.9[213399]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:57:41 np0005596060 systemd[1]: Reloading.
Jan 26 12:57:41 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:57:41 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:57:41 np0005596060 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 26 12:57:41 np0005596060 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 26 12:57:41 np0005596060 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 26 12:57:41 np0005596060 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 26 12:57:41 np0005596060 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 26 12:57:41 np0005596060 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 26 12:57:41 np0005596060 systemd[1]: Starting libvirt nodedev daemon...
Jan 26 12:57:41 np0005596060 systemd[1]: Started libvirt nodedev daemon.
Jan 26 12:57:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:42 np0005596060 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 26 12:57:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:57:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:42.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:42 np0005596060 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 26 12:57:42 np0005596060 python3.9[213617]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:57:42 np0005596060 systemd[1]: Reloading.
Jan 26 12:57:42 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:57:42 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:57:42 np0005596060 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 26 12:57:42 np0005596060 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 26 12:57:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:42.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:42 np0005596060 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 26 12:57:42 np0005596060 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 26 12:57:42 np0005596060 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 26 12:57:42 np0005596060 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 26 12:57:42 np0005596060 systemd[1]: Starting libvirt proxy daemon...
Jan 26 12:57:43 np0005596060 systemd[1]: Started libvirt proxy daemon.
Jan 26 12:57:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:43 np0005596060 python3.9[213838]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:57:43 np0005596060 setroubleshoot[213588]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f850db21-48bb-4485-bf93-dc45304ce06a
Jan 26 12:57:43 np0005596060 systemd[1]: Reloading.
Jan 26 12:57:43 np0005596060 setroubleshoot[213588]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 26 12:57:43 np0005596060 setroubleshoot[213588]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l f850db21-48bb-4485-bf93-dc45304ce06a
Jan 26 12:57:43 np0005596060 setroubleshoot[213588]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Jan 26 12:57:43 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:57:43 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:57:44
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'images', 'vms']
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:57:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:44.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:44 np0005596060 systemd[1]: Listening on libvirt locking daemon socket.
Jan 26 12:57:44 np0005596060 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 26 12:57:44 np0005596060 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 26 12:57:44 np0005596060 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 26 12:57:44 np0005596060 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 26 12:57:44 np0005596060 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 26 12:57:44 np0005596060 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 26 12:57:44 np0005596060 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 26 12:57:44 np0005596060 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 26 12:57:44 np0005596060 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 26 12:57:44 np0005596060 systemd[1]: Starting libvirt QEMU daemon...
Jan 26 12:57:44 np0005596060 systemd[1]: Started libvirt QEMU daemon.
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:57:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:57:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:44.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:45 np0005596060 python3.9[214054]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:57:45 np0005596060 systemd[1]: Reloading.
Jan 26 12:57:45 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:57:45 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:57:45 np0005596060 systemd[1]: Starting libvirt secret daemon socket...
Jan 26 12:57:45 np0005596060 systemd[1]: Listening on libvirt secret daemon socket.
Jan 26 12:57:45 np0005596060 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 26 12:57:45 np0005596060 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 26 12:57:45 np0005596060 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 26 12:57:45 np0005596060 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 26 12:57:45 np0005596060 systemd[1]: Starting libvirt secret daemon...
Jan 26 12:57:45 np0005596060 systemd[1]: Started libvirt secret daemon.
Jan 26 12:57:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:46.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:57:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:46.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:57:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:57:47 np0005596060 podman[214239]: 2026-01-26 17:57:47.330358933 +0000 UTC m=+0.092675084 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:57:47 np0005596060 python3.9[214278]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:57:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:48.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:57:48 np0005596060 python3.9[214437]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 12:57:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:48.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:49 np0005596060 python3.9[214589]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:57:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:50.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:50 np0005596060 python3.9[214744]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 12:57:50 np0005596060 podman[214769]: 2026-01-26 17:57:50.840398451 +0000 UTC m=+0.102081131 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 12:57:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:50.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:51 np0005596060 python3.9[214921]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:52 np0005596060 python3.9[215043]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769450271.0244193-3383-232411556980409/.source.xml follow=False _original_basename=secret.xml.j2 checksum=f5640975c7830314b4ada1f1cfe8314b62b47503 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:57:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:57:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:52.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:57:52 np0005596060 python3.9[215195]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine d4cd1917-5876-51b6-bc64-65a16199754d#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:57:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:52.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:53 np0005596060 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 26 12:57:53 np0005596060 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 26 12:57:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:57:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:54.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:57:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:54.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:55 np0005596060 python3.9[215358]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.003000075s ======
Jan 26 12:57:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:56.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000075s
Jan 26 12:57:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:56.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:57:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:58 np0005596060 python3.9[215823]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:57:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:57:58.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:58 np0005596060 python3.9[215975]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:57:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:57:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:57:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:57:58.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:57:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:57:59 np0005596060 python3.9[216099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769450278.2934144-3548-206888091435220/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:00.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:00 np0005596060 python3.9[216251]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:00.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:01 np0005596060 python3.9[216454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:58:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:58:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:02.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:02 np0005596060 python3.9[216532]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:02.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:03 np0005596060 python3.9[216684]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:58:03 np0005596060 python3.9[216762]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.34ctjq29 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:04.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:04 np0005596060 python3.9[216915]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:58:04 np0005596060 python3.9[216993]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:04.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:05 np0005596060 python3.9[217146]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:58:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:06.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:06 np0005596060 python3[217299]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 12:58:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:06.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:58:07 np0005596060 python3.9[217451]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:58:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:07 np0005596060 python3.9[217530]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:08.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:08 np0005596060 python3.9[217682]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:58:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:08.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:09 np0005596060 python3.9[217807]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450288.1786687-3815-33871613865113/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:10 np0005596060 python3.9[217960]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:58:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:10.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:10 np0005596060 python3.9[218038]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:10.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:11 np0005596060 python3.9[218190]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:58:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:12 np0005596060 python3.9[218270]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:58:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:12.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:12.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:12 np0005596060 python3.9[218422]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:58:13 np0005596060 python3.9[218547]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769450292.331246-3932-45498490873607/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:58:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:58:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:58:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:58:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:58:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:58:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:14.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:14 np0005596060 python3.9[218814]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:58:14 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev cf9d2d02-72ae-46f2-98b4-ac168f0a25b9 does not exist
Jan 26 12:58:14 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 42556cf4-f6c3-4b12-a8f1-3c65bb140698 does not exist
Jan 26 12:58:14 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b0a0ca57-3e89-43c9-986a-e5aad4611677 does not exist
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:58:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:58:14.728 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 12:58:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:58:14.730 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 12:58:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:58:14.730 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:58:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:58:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:14.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:15 np0005596060 python3.9[219083]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:58:15 np0005596060 podman[219126]: 2026-01-26 17:58:15.141371227 +0000 UTC m=+0.046131072 container create 6b66aac0e676468d471be1fea54430e1286c6841cd1167fa9dc440e4b8ecc97c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:58:15 np0005596060 systemd[1]: Started libpod-conmon-6b66aac0e676468d471be1fea54430e1286c6841cd1167fa9dc440e4b8ecc97c.scope.
Jan 26 12:58:15 np0005596060 podman[219126]: 2026-01-26 17:58:15.122983334 +0000 UTC m=+0.027743209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:58:15 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:58:15 np0005596060 podman[219126]: 2026-01-26 17:58:15.24074893 +0000 UTC m=+0.145508825 container init 6b66aac0e676468d471be1fea54430e1286c6841cd1167fa9dc440e4b8ecc97c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:58:15 np0005596060 podman[219126]: 2026-01-26 17:58:15.248456004 +0000 UTC m=+0.153215849 container start 6b66aac0e676468d471be1fea54430e1286c6841cd1167fa9dc440e4b8ecc97c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:58:15 np0005596060 podman[219126]: 2026-01-26 17:58:15.25230393 +0000 UTC m=+0.157063795 container attach 6b66aac0e676468d471be1fea54430e1286c6841cd1167fa9dc440e4b8ecc97c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:58:15 np0005596060 jovial_khorana[219165]: 167 167
Jan 26 12:58:15 np0005596060 systemd[1]: libpod-6b66aac0e676468d471be1fea54430e1286c6841cd1167fa9dc440e4b8ecc97c.scope: Deactivated successfully.
Jan 26 12:58:15 np0005596060 conmon[219165]: conmon 6b66aac0e676468d471b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b66aac0e676468d471be1fea54430e1286c6841cd1167fa9dc440e4b8ecc97c.scope/container/memory.events
Jan 26 12:58:15 np0005596060 podman[219126]: 2026-01-26 17:58:15.258666731 +0000 UTC m=+0.163426576 container died 6b66aac0e676468d471be1fea54430e1286c6841cd1167fa9dc440e4b8ecc97c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 12:58:15 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0369e1f3db63c1908a84e3b6d1f72b7fcb159d67055dde7e1970836ef0f91ae8-merged.mount: Deactivated successfully.
Jan 26 12:58:15 np0005596060 podman[219126]: 2026-01-26 17:58:15.311630004 +0000 UTC m=+0.216389849 container remove 6b66aac0e676468d471be1fea54430e1286c6841cd1167fa9dc440e4b8ecc97c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_khorana, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:58:15 np0005596060 systemd[1]: libpod-conmon-6b66aac0e676468d471be1fea54430e1286c6841cd1167fa9dc440e4b8ecc97c.scope: Deactivated successfully.
Jan 26 12:58:15 np0005596060 podman[219247]: 2026-01-26 17:58:15.459478127 +0000 UTC m=+0.026546060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:58:15 np0005596060 podman[219247]: 2026-01-26 17:58:15.559594158 +0000 UTC m=+0.126662061 container create d4cdfc0c8799c2ae106b3c412544eac4d6ef0efaedbe64ec0a04848544a203aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 12:58:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:15 np0005596060 systemd[1]: Started libpod-conmon-d4cdfc0c8799c2ae106b3c412544eac4d6ef0efaedbe64ec0a04848544a203aa.scope.
Jan 26 12:58:15 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:58:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c075f6213a1af84dbe429d5dfd2ff070e6cd8692986df3425d1dc810a02fee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c075f6213a1af84dbe429d5dfd2ff070e6cd8692986df3425d1dc810a02fee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c075f6213a1af84dbe429d5dfd2ff070e6cd8692986df3425d1dc810a02fee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c075f6213a1af84dbe429d5dfd2ff070e6cd8692986df3425d1dc810a02fee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c075f6213a1af84dbe429d5dfd2ff070e6cd8692986df3425d1dc810a02fee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:15 np0005596060 podman[219247]: 2026-01-26 17:58:15.761949563 +0000 UTC m=+0.329017496 container init d4cdfc0c8799c2ae106b3c412544eac4d6ef0efaedbe64ec0a04848544a203aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:58:15 np0005596060 podman[219247]: 2026-01-26 17:58:15.773784561 +0000 UTC m=+0.340852464 container start d4cdfc0c8799c2ae106b3c412544eac4d6ef0efaedbe64ec0a04848544a203aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wescoff, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 12:58:15 np0005596060 podman[219247]: 2026-01-26 17:58:15.786643155 +0000 UTC m=+0.353711078 container attach d4cdfc0c8799c2ae106b3c412544eac4d6ef0efaedbe64ec0a04848544a203aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wescoff, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:58:16 np0005596060 python3.9[219344]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:16.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:16 np0005596060 thirsty_wescoff[219287]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:58:16 np0005596060 thirsty_wescoff[219287]: --> relative data size: 1.0
Jan 26 12:58:16 np0005596060 thirsty_wescoff[219287]: --> All data devices are unavailable
Jan 26 12:58:16 np0005596060 systemd[1]: libpod-d4cdfc0c8799c2ae106b3c412544eac4d6ef0efaedbe64ec0a04848544a203aa.scope: Deactivated successfully.
Jan 26 12:58:16 np0005596060 podman[219247]: 2026-01-26 17:58:16.616945691 +0000 UTC m=+1.184013684 container died d4cdfc0c8799c2ae106b3c412544eac4d6ef0efaedbe64ec0a04848544a203aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wescoff, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:58:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 26 12:58:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:16.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 26 12:58:16 np0005596060 python3.9[219514]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:58:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:58:17 np0005596060 systemd[1]: var-lib-containers-storage-overlay-22c075f6213a1af84dbe429d5dfd2ff070e6cd8692986df3425d1dc810a02fee-merged.mount: Deactivated successfully.
Jan 26 12:58:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:17 np0005596060 python3.9[219681]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:58:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:18.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:18 np0005596060 python3.9[219835]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:58:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:18.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:19 np0005596060 python3.9[219992]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:19 np0005596060 podman[219247]: 2026-01-26 17:58:19.534993433 +0000 UTC m=+4.102061336 container remove d4cdfc0c8799c2ae106b3c412544eac4d6ef0efaedbe64ec0a04848544a203aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wescoff, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:58:19 np0005596060 systemd[1]: libpod-conmon-d4cdfc0c8799c2ae106b3c412544eac4d6ef0efaedbe64ec0a04848544a203aa.scope: Deactivated successfully.
Jan 26 12:58:19 np0005596060 podman[219644]: 2026-01-26 17:58:19.607080158 +0000 UTC m=+2.110517441 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 12:58:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:20 np0005596060 python3.9[220255]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:58:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:58:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:20.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:58:20 np0005596060 podman[220319]: 2026-01-26 17:58:20.241309257 +0000 UTC m=+0.027261037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:58:20 np0005596060 podman[220319]: 2026-01-26 17:58:20.348727542 +0000 UTC m=+0.134679272 container create 10f4e0dcea7cce229ba774320d9ad9f1687a9d88fa94afa5e210bd7569af0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 12:58:20 np0005596060 systemd[1]: Started libpod-conmon-10f4e0dcea7cce229ba774320d9ad9f1687a9d88fa94afa5e210bd7569af0f8a.scope.
Jan 26 12:58:20 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:58:20 np0005596060 podman[220319]: 2026-01-26 17:58:20.58974705 +0000 UTC m=+0.375698840 container init 10f4e0dcea7cce229ba774320d9ad9f1687a9d88fa94afa5e210bd7569af0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dhawan, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:58:20 np0005596060 podman[220319]: 2026-01-26 17:58:20.600364948 +0000 UTC m=+0.386316708 container start 10f4e0dcea7cce229ba774320d9ad9f1687a9d88fa94afa5e210bd7569af0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 12:58:20 np0005596060 podman[220319]: 2026-01-26 17:58:20.605070366 +0000 UTC m=+0.391022146 container attach 10f4e0dcea7cce229ba774320d9ad9f1687a9d88fa94afa5e210bd7569af0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dhawan, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:58:20 np0005596060 charming_dhawan[220437]: 167 167
Jan 26 12:58:20 np0005596060 systemd[1]: libpod-10f4e0dcea7cce229ba774320d9ad9f1687a9d88fa94afa5e210bd7569af0f8a.scope: Deactivated successfully.
Jan 26 12:58:20 np0005596060 conmon[220437]: conmon 10f4e0dcea7cce229ba7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-10f4e0dcea7cce229ba774320d9ad9f1687a9d88fa94afa5e210bd7569af0f8a.scope/container/memory.events
Jan 26 12:58:20 np0005596060 podman[220442]: 2026-01-26 17:58:20.655751512 +0000 UTC m=+0.025343779 container died 10f4e0dcea7cce229ba774320d9ad9f1687a9d88fa94afa5e210bd7569af0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:58:20 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b607f0e054e83d4e5d03421aafba439630af830ca63e3cdaf0e5057246521c7a-merged.mount: Deactivated successfully.
Jan 26 12:58:20 np0005596060 python3.9[220432]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769450299.6013758-4148-32004618233553/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:20 np0005596060 podman[220442]: 2026-01-26 17:58:20.696693793 +0000 UTC m=+0.066286060 container remove 10f4e0dcea7cce229ba774320d9ad9f1687a9d88fa94afa5e210bd7569af0f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dhawan, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 12:58:20 np0005596060 systemd[1]: libpod-conmon-10f4e0dcea7cce229ba774320d9ad9f1687a9d88fa94afa5e210bd7569af0f8a.scope: Deactivated successfully.
Jan 26 12:58:20 np0005596060 podman[220488]: 2026-01-26 17:58:20.927404062 +0000 UTC m=+0.059807717 container create 67f14348496cd0bf8462131e3dc3573bff259c9b5cc6ca37e27004ce593e8c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 26 12:58:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:20.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:20 np0005596060 systemd[1]: Started libpod-conmon-67f14348496cd0bf8462131e3dc3573bff259c9b5cc6ca37e27004ce593e8c1e.scope.
Jan 26 12:58:20 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:58:20 np0005596060 podman[220488]: 2026-01-26 17:58:20.902912585 +0000 UTC m=+0.035316260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:58:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb37833583a5704c805485b75658a8c1a48b9faff38c898c79e1b9aa4b1d583/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb37833583a5704c805485b75658a8c1a48b9faff38c898c79e1b9aa4b1d583/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb37833583a5704c805485b75658a8c1a48b9faff38c898c79e1b9aa4b1d583/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb37833583a5704c805485b75658a8c1a48b9faff38c898c79e1b9aa4b1d583/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:21 np0005596060 podman[220488]: 2026-01-26 17:58:21.018322781 +0000 UTC m=+0.150726446 container init 67f14348496cd0bf8462131e3dc3573bff259c9b5cc6ca37e27004ce593e8c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:58:21 np0005596060 podman[220488]: 2026-01-26 17:58:21.030315663 +0000 UTC m=+0.162719318 container start 67f14348496cd0bf8462131e3dc3573bff259c9b5cc6ca37e27004ce593e8c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 12:58:21 np0005596060 podman[220488]: 2026-01-26 17:58:21.034647532 +0000 UTC m=+0.167051167 container attach 67f14348496cd0bf8462131e3dc3573bff259c9b5cc6ca37e27004ce593e8c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 26 12:58:21 np0005596060 podman[220543]: 2026-01-26 17:58:21.120992747 +0000 UTC m=+0.140498429 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 12:58:21 np0005596060 python3.9[220664]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:58:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:21 np0005596060 focused_feistel[220552]: {
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:    "1": [
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:        {
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            "devices": [
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "/dev/loop3"
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            ],
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            "lv_name": "ceph_lv0",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            "lv_size": "7511998464",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            "name": "ceph_lv0",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            "tags": {
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "ceph.cluster_name": "ceph",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "ceph.crush_device_class": "",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "ceph.encrypted": "0",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "ceph.osd_id": "1",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "ceph.type": "block",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:                "ceph.vdo": "0"
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            },
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            "type": "block",
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:            "vg_name": "ceph_vg0"
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:        }
Jan 26 12:58:21 np0005596060 focused_feistel[220552]:    ]
Jan 26 12:58:21 np0005596060 focused_feistel[220552]: }
Jan 26 12:58:21 np0005596060 systemd[1]: libpod-67f14348496cd0bf8462131e3dc3573bff259c9b5cc6ca37e27004ce593e8c1e.scope: Deactivated successfully.
Jan 26 12:58:21 np0005596060 podman[220488]: 2026-01-26 17:58:21.892326528 +0000 UTC m=+1.024730163 container died 67f14348496cd0bf8462131e3dc3573bff259c9b5cc6ca37e27004ce593e8c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 26 12:58:21 np0005596060 python3.9[220834]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769450300.8977609-4193-28586910596578/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:58:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:22.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-adb37833583a5704c805485b75658a8c1a48b9faff38c898c79e1b9aa4b1d583-merged.mount: Deactivated successfully.
Jan 26 12:58:22 np0005596060 podman[220488]: 2026-01-26 17:58:22.359348806 +0000 UTC m=+1.491752441 container remove 67f14348496cd0bf8462131e3dc3573bff259c9b5cc6ca37e27004ce593e8c1e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_feistel, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:58:22 np0005596060 systemd[1]: libpod-conmon-67f14348496cd0bf8462131e3dc3573bff259c9b5cc6ca37e27004ce593e8c1e.scope: Deactivated successfully.
Jan 26 12:58:22 np0005596060 python3.9[221039]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:58:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:22.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:23 np0005596060 podman[221236]: 2026-01-26 17:58:23.068336918 +0000 UTC m=+0.052105713 container create 5fc39d9df42dfe46e5cb39344068f50204f319e5dd3e9fc1cd21faed490ca4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:58:23 np0005596060 systemd[1]: Started libpod-conmon-5fc39d9df42dfe46e5cb39344068f50204f319e5dd3e9fc1cd21faed490ca4de.scope.
Jan 26 12:58:23 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:58:23 np0005596060 podman[221236]: 2026-01-26 17:58:23.048597771 +0000 UTC m=+0.032366566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:58:23 np0005596060 podman[221236]: 2026-01-26 17:58:23.165694969 +0000 UTC m=+0.149463864 container init 5fc39d9df42dfe46e5cb39344068f50204f319e5dd3e9fc1cd21faed490ca4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:58:23 np0005596060 podman[221236]: 2026-01-26 17:58:23.176728377 +0000 UTC m=+0.160497182 container start 5fc39d9df42dfe46e5cb39344068f50204f319e5dd3e9fc1cd21faed490ca4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 26 12:58:23 np0005596060 podman[221236]: 2026-01-26 17:58:23.180796279 +0000 UTC m=+0.164565094 container attach 5fc39d9df42dfe46e5cb39344068f50204f319e5dd3e9fc1cd21faed490ca4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 12:58:23 np0005596060 ecstatic_hopper[221281]: 167 167
Jan 26 12:58:23 np0005596060 systemd[1]: libpod-5fc39d9df42dfe46e5cb39344068f50204f319e5dd3e9fc1cd21faed490ca4de.scope: Deactivated successfully.
Jan 26 12:58:23 np0005596060 conmon[221281]: conmon 5fc39d9df42dfe46e5cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fc39d9df42dfe46e5cb39344068f50204f319e5dd3e9fc1cd21faed490ca4de.scope/container/memory.events
Jan 26 12:58:23 np0005596060 podman[221236]: 2026-01-26 17:58:23.184437571 +0000 UTC m=+0.168206366 container died 5fc39d9df42dfe46e5cb39344068f50204f319e5dd3e9fc1cd21faed490ca4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:58:23 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a7953604c091f9a3bad95cf4195d1bf6a4a0eb415f7ab1ec5ac9c72c518dd886-merged.mount: Deactivated successfully.
Jan 26 12:58:23 np0005596060 podman[221236]: 2026-01-26 17:58:23.230148512 +0000 UTC m=+0.213917307 container remove 5fc39d9df42dfe46e5cb39344068f50204f319e5dd3e9fc1cd21faed490ca4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 12:58:23 np0005596060 systemd[1]: libpod-conmon-5fc39d9df42dfe46e5cb39344068f50204f319e5dd3e9fc1cd21faed490ca4de.scope: Deactivated successfully.
Jan 26 12:58:23 np0005596060 python3.9[221279]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769450302.225224-4238-251390600994314/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:23 np0005596060 podman[221305]: 2026-01-26 17:58:23.401575818 +0000 UTC m=+0.049532378 container create 5165776b6781c5152091ccc7e89e8202de0d9fef468d1c8f88f2bfa52028b68b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 12:58:23 np0005596060 systemd[1]: Started libpod-conmon-5165776b6781c5152091ccc7e89e8202de0d9fef468d1c8f88f2bfa52028b68b.scope.
Jan 26 12:58:23 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:58:23 np0005596060 podman[221305]: 2026-01-26 17:58:23.382523618 +0000 UTC m=+0.030480198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:58:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e9a05bda92187c8bdfaff74d01449560fcbf0ed390cb5f0735e8f50c7f1834/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e9a05bda92187c8bdfaff74d01449560fcbf0ed390cb5f0735e8f50c7f1834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e9a05bda92187c8bdfaff74d01449560fcbf0ed390cb5f0735e8f50c7f1834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e9a05bda92187c8bdfaff74d01449560fcbf0ed390cb5f0735e8f50c7f1834/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:58:23 np0005596060 podman[221305]: 2026-01-26 17:58:23.495218586 +0000 UTC m=+0.143175176 container init 5165776b6781c5152091ccc7e89e8202de0d9fef468d1c8f88f2bfa52028b68b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 26 12:58:23 np0005596060 podman[221305]: 2026-01-26 17:58:23.504469779 +0000 UTC m=+0.152426329 container start 5165776b6781c5152091ccc7e89e8202de0d9fef468d1c8f88f2bfa52028b68b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 12:58:23 np0005596060 podman[221305]: 2026-01-26 17:58:23.510393658 +0000 UTC m=+0.158350238 container attach 5165776b6781c5152091ccc7e89e8202de0d9fef468d1c8f88f2bfa52028b68b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 12:58:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:24 np0005596060 python3.9[221479]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:58:24 np0005596060 systemd[1]: Reloading.
Jan 26 12:58:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:24.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:24 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:58:24 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:58:24 np0005596060 jovial_dirac[221347]: {
Jan 26 12:58:24 np0005596060 jovial_dirac[221347]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:58:24 np0005596060 jovial_dirac[221347]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:58:24 np0005596060 jovial_dirac[221347]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:58:24 np0005596060 jovial_dirac[221347]:        "osd_id": 1,
Jan 26 12:58:24 np0005596060 jovial_dirac[221347]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:58:24 np0005596060 jovial_dirac[221347]:        "type": "bluestore"
Jan 26 12:58:24 np0005596060 jovial_dirac[221347]:    }
Jan 26 12:58:24 np0005596060 jovial_dirac[221347]: }
Jan 26 12:58:24 np0005596060 podman[221305]: 2026-01-26 17:58:24.412089462 +0000 UTC m=+1.060046052 container died 5165776b6781c5152091ccc7e89e8202de0d9fef468d1c8f88f2bfa52028b68b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 26 12:58:24 np0005596060 systemd[1]: libpod-5165776b6781c5152091ccc7e89e8202de0d9fef468d1c8f88f2bfa52028b68b.scope: Deactivated successfully.
Jan 26 12:58:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay-17e9a05bda92187c8bdfaff74d01449560fcbf0ed390cb5f0735e8f50c7f1834-merged.mount: Deactivated successfully.
Jan 26 12:58:24 np0005596060 podman[221305]: 2026-01-26 17:58:24.611643836 +0000 UTC m=+1.259600396 container remove 5165776b6781c5152091ccc7e89e8202de0d9fef468d1c8f88f2bfa52028b68b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:58:24 np0005596060 systemd[1]: libpod-conmon-5165776b6781c5152091ccc7e89e8202de0d9fef468d1c8f88f2bfa52028b68b.scope: Deactivated successfully.
Jan 26 12:58:24 np0005596060 systemd[1]: Reached target edpm_libvirt.target.
Jan 26 12:58:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:58:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:58:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:58:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:58:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 40f81fb4-6b17-49ae-8214-ab77915bf44c does not exist
Jan 26 12:58:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ef796525-1a12-497e-8d98-1deb4b67a662 does not exist
Jan 26 12:58:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev f6cf6d9f-2dee-4e20-bc21-aab098caf299 does not exist
Jan 26 12:58:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:24.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:25 np0005596060 python3.9[221751]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 26 12:58:25 np0005596060 systemd[1]: Reloading.
Jan 26 12:58:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:25 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:58:25 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:58:25 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:58:25 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:58:25 np0005596060 systemd[1]: Reloading.
Jan 26 12:58:26 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:58:26 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:58:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:26.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:26 np0005596060 systemd[1]: session-49.scope: Deactivated successfully.
Jan 26 12:58:26 np0005596060 systemd[1]: session-49.scope: Consumed 3min 48.655s CPU time.
Jan 26 12:58:26 np0005596060 systemd-logind[786]: Session 49 logged out. Waiting for processes to exit.
Jan 26 12:58:26 np0005596060 systemd-logind[786]: Removed session 49.
Jan 26 12:58:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:26.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:58:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:28.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:28.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:30.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:30.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:31 np0005596060 systemd-logind[786]: New session 50 of user zuul.
Jan 26 12:58:31 np0005596060 systemd[1]: Started Session 50 of User zuul.
Jan 26 12:58:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:58:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:32.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:32 np0005596060 python3.9[222006]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:58:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:32.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:34.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:34 np0005596060 python3.9[222161]: ansible-ansible.builtin.service_facts Invoked
Jan 26 12:58:34 np0005596060 network[222178]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 12:58:34 np0005596060 network[222179]: 'network-scripts' will be removed from distribution in near future.
Jan 26 12:58:34 np0005596060 network[222180]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 12:58:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:34.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:36.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:36.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:58:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:38.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:58:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:38.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:58:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:39 np0005596060 python3.9[222455]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 12:58:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:40.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:40 np0005596060 python3.9[222539]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:58:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:40.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:58:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:58:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:42.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:58:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:42.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:58:44
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['images', 'default.rgw.control', 'default.rgw.log', 'vms', 'backups', 'default.rgw.meta', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:58:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:44.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:58:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:58:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:44.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:46.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:46.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:58:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:47 np0005596060 python3.9[222746]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:58:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:48.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:48.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:49 np0005596060 python3.9[222898]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:58:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:49 np0005596060 podman[222976]: 2026-01-26 17:58:49.887040287 +0000 UTC m=+0.077406640 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 12:58:50 np0005596060 python3.9[223071]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:58:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:50.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:50 np0005596060 python3.9[223223]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:58:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:50.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:51 np0005596060 podman[223348]: 2026-01-26 17:58:51.528368893 +0000 UTC m=+0.103750063 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 12:58:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:51 np0005596060 python3.9[223398]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:58:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:58:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:58:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:52.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:58:52 np0005596060 python3.9[223527]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769450331.1571965-245-218823103402065/.source.iscsi _original_basename=.9ey31z97 follow=False checksum=57238ff0c11135c1ad9751aaa65e9f64a0018aa8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:52.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:53 np0005596060 python3.9[223679]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:54 np0005596060 python3.9[223832]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:58:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:54.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 12:58:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3699 writes, 16K keys, 3699 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3699 writes, 3699 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1352 writes, 5547 keys, 1352 commit groups, 1.0 writes per commit group, ingest: 9.52 MB, 0.02 MB/s#012Interval WAL: 1352 writes, 1352 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      9.7      2.03              0.08         7    0.289       0      0       0.0       0.0#012  L6      1/0    7.51 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6     41.0     33.1      1.54              0.17         6    0.257     26K   3368       0.0       0.0#012 Sum      1/0    7.51 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6     17.8     19.9      3.57              0.24        13    0.275     26K   3368       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.3     13.9     13.9      2.50              0.13         6    0.416     14K   2053       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0     41.0     33.1      1.54              0.17         6    0.257     26K   3368       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      9.7      2.02              0.08         6    0.337       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.019, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 3.6 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 2.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5652937211f0#2 capacity: 304.00 MB usage: 2.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000138 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(115,2.04 MB,0.672009%) FilterBlock(14,82.92 KB,0.0266376%) IndexBlock(14,170.42 KB,0.0547459%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 26 12:58:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:58:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:54.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:58:55 np0005596060 python3.9[223984]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:58:55 np0005596060 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 26 12:58:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:58:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:56.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:58:56 np0005596060 python3.9[224141]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:58:56 np0005596060 systemd[1]: Reloading.
Jan 26 12:58:56 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:58:56 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:58:56 np0005596060 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 26 12:58:56 np0005596060 systemd[1]: Starting Open-iSCSI...
Jan 26 12:58:56 np0005596060 kernel: Loading iSCSI transport class v2.0-870.
Jan 26 12:58:56 np0005596060 systemd[1]: Started Open-iSCSI.
Jan 26 12:58:56 np0005596060 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 26 12:58:56 np0005596060 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 26 12:58:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:56.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:58:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:58:57 np0005596060 python3.9[224340]: ansible-ansible.builtin.service_facts Invoked
Jan 26 12:58:58 np0005596060 network[224357]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 12:58:58 np0005596060 network[224358]: 'network-scripts' will be removed from distribution in near future.
Jan 26 12:58:58 np0005596060 network[224359]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 12:58:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:58:58.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:58:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:58:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:58:59.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:58:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:00.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:59:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:02.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 12:59:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:04.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:04 np0005596060 python3.9[224684]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:59:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:05.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:06.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:07.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:07 np0005596060 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 12:59:07 np0005596060 systemd[1]: Starting man-db-cache-update.service...
Jan 26 12:59:07 np0005596060 systemd[1]: Reloading.
Jan 26 12:59:07 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:59:07 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:59:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:59:07 np0005596060 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 12:59:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:07 np0005596060 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 12:59:07 np0005596060 systemd[1]: Finished man-db-cache-update.service.
Jan 26 12:59:07 np0005596060 systemd[1]: run-r90edfbeaae804c25969807baba3d3875.service: Deactivated successfully.
Jan 26 12:59:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:08.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:08 np0005596060 python3.9[225001]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 26 12:59:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:09.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:09 np0005596060 python3.9[225154]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 26 12:59:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:10.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:10 np0005596060 python3.9[225310]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:59:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:11.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:11 np0005596060 python3.9[225433]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769450350.1794024-509-237529647398391/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:59:12 np0005596060 python3.9[225586]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:12.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:13.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:13 np0005596060 python3.9[225738]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:59:13 np0005596060 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 26 12:59:13 np0005596060 systemd[1]: Stopped Load Kernel Modules.
Jan 26 12:59:13 np0005596060 systemd[1]: Stopping Load Kernel Modules...
Jan 26 12:59:13 np0005596060 systemd[1]: Starting Load Kernel Modules...
Jan 26 12:59:13 np0005596060 systemd[1]: Finished Load Kernel Modules.
Jan 26 12:59:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:59:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:59:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:59:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:59:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:59:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:59:14 np0005596060 python3.9[225895]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:59:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:59:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:14.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:59:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:59:14.730 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 12:59:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:59:14.732 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 12:59:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 17:59:14.733 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 12:59:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:15.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:15 np0005596060 python3.9[226048]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:59:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:15 np0005596060 python3.9[226201]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:59:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:16.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:16 np0005596060 python3.9[226324]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769450355.4232042-662-83908115281216/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 12:59:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:17.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 12:59:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:59:17 np0005596060 python3.9[226476]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:59:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:17 np0005596060 python3.9[226630]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:18.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:19 np0005596060 python3.9[226782]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:19.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:19 np0005596060 python3.9[226935]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:20 np0005596060 podman[227059]: 2026-01-26 17:59:20.325269829 +0000 UTC m=+0.076591420 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 12:59:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:20.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:20 np0005596060 python3.9[227104]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:21.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:21 np0005596060 python3.9[227259]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:21 np0005596060 podman[227384]: 2026-01-26 17:59:21.686280328 +0000 UTC m=+0.113079428 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, 
container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 26 12:59:21 np0005596060 python3.9[227431]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:59:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:22.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:22 np0005596060 python3.9[227639]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:23.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:23 np0005596060 python3.9[227791]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 12:59:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:23 np0005596060 python3.9[227946]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 12:59:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:24.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:24 np0005596060 python3.9[228099]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:59:24 np0005596060 systemd[1]: Listening on multipathd control socket.
Jan 26 12:59:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:25.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:25 np0005596060 python3.9[228355]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:59:25 np0005596060 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 26 12:59:25 np0005596060 udevadm[228382]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 26 12:59:25 np0005596060 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 26 12:59:25 np0005596060 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 26 12:59:25 np0005596060 multipathd[228397]: --------start up--------
Jan 26 12:59:25 np0005596060 multipathd[228397]: read /etc/multipath.conf
Jan 26 12:59:25 np0005596060 multipathd[228397]: path checkers start up
Jan 26 12:59:25 np0005596060 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 26 12:59:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:59:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:59:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 12:59:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:59:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 12:59:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:59:26 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 2ee54db5-3741-46b5-94d6-f1a4d02579a2 does not exist
Jan 26 12:59:26 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 2a59593b-f94d-4f0b-a35e-4ebdf339743c does not exist
Jan 26 12:59:26 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 07a202d1-8b0e-4e9d-bed9-7779adceea3f does not exist
Jan 26 12:59:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 12:59:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 12:59:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 12:59:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:59:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 12:59:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 12:59:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:26.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:26 np0005596060 podman[228568]: 2026-01-26 17:59:26.801514422 +0000 UTC m=+0.048000830 container create e432ae6a8b2415776ff348d53fc3395b4f0fcccd8aabaae73dac6a1388098b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 12:59:26 np0005596060 systemd[1]: Started libpod-conmon-e432ae6a8b2415776ff348d53fc3395b4f0fcccd8aabaae73dac6a1388098b2f.scope.
Jan 26 12:59:26 np0005596060 podman[228568]: 2026-01-26 17:59:26.778335458 +0000 UTC m=+0.024821916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:59:26 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:59:26 np0005596060 podman[228568]: 2026-01-26 17:59:26.89476663 +0000 UTC m=+0.141253068 container init e432ae6a8b2415776ff348d53fc3395b4f0fcccd8aabaae73dac6a1388098b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 12:59:26 np0005596060 podman[228568]: 2026-01-26 17:59:26.9043091 +0000 UTC m=+0.150795498 container start e432ae6a8b2415776ff348d53fc3395b4f0fcccd8aabaae73dac6a1388098b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 12:59:26 np0005596060 podman[228568]: 2026-01-26 17:59:26.907798328 +0000 UTC m=+0.154284756 container attach e432ae6a8b2415776ff348d53fc3395b4f0fcccd8aabaae73dac6a1388098b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:59:26 np0005596060 optimistic_blackwell[228584]: 167 167
Jan 26 12:59:26 np0005596060 systemd[1]: libpod-e432ae6a8b2415776ff348d53fc3395b4f0fcccd8aabaae73dac6a1388098b2f.scope: Deactivated successfully.
Jan 26 12:59:26 np0005596060 podman[228568]: 2026-01-26 17:59:26.9126587 +0000 UTC m=+0.159145108 container died e432ae6a8b2415776ff348d53fc3395b4f0fcccd8aabaae73dac6a1388098b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:59:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 12:59:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:59:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 12:59:26 np0005596060 systemd[1]: var-lib-containers-storage-overlay-09b0c51a78bb2f352666c6efe1512734da88185d23fd9bf5571dca0c2d9ef088-merged.mount: Deactivated successfully.
Jan 26 12:59:26 np0005596060 podman[228568]: 2026-01-26 17:59:26.953306984 +0000 UTC m=+0.199793392 container remove e432ae6a8b2415776ff348d53fc3395b4f0fcccd8aabaae73dac6a1388098b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_blackwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 12:59:26 np0005596060 systemd[1]: libpod-conmon-e432ae6a8b2415776ff348d53fc3395b4f0fcccd8aabaae73dac6a1388098b2f.scope: Deactivated successfully.
Jan 26 12:59:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:27.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:27 np0005596060 podman[228608]: 2026-01-26 17:59:27.139769229 +0000 UTC m=+0.055028957 container create 7188cc42acd9c7b0f05f7cbff4f7ee26750670bda06b9c4294a0298a4d9db9e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 12:59:27 np0005596060 systemd[1]: Started libpod-conmon-7188cc42acd9c7b0f05f7cbff4f7ee26750670bda06b9c4294a0298a4d9db9e0.scope.
Jan 26 12:59:27 np0005596060 podman[228608]: 2026-01-26 17:59:27.117941489 +0000 UTC m=+0.033201267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:59:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:59:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeafa75ce036ade447014c0fad0e99fee2666eb459099ead1e8491df3faadb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeafa75ce036ade447014c0fad0e99fee2666eb459099ead1e8491df3faadb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeafa75ce036ade447014c0fad0e99fee2666eb459099ead1e8491df3faadb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeafa75ce036ade447014c0fad0e99fee2666eb459099ead1e8491df3faadb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0eeafa75ce036ade447014c0fad0e99fee2666eb459099ead1e8491df3faadb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:27 np0005596060 podman[228608]: 2026-01-26 17:59:27.239727936 +0000 UTC m=+0.154987684 container init 7188cc42acd9c7b0f05f7cbff4f7ee26750670bda06b9c4294a0298a4d9db9e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 26 12:59:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:59:27 np0005596060 podman[228608]: 2026-01-26 17:59:27.248405964 +0000 UTC m=+0.163665692 container start 7188cc42acd9c7b0f05f7cbff4f7ee26750670bda06b9c4294a0298a4d9db9e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 26 12:59:27 np0005596060 podman[228608]: 2026-01-26 17:59:27.251780549 +0000 UTC m=+0.167040297 container attach 7188cc42acd9c7b0f05f7cbff4f7ee26750670bda06b9c4294a0298a4d9db9e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_neumann, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 26 12:59:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:27 np0005596060 python3.9[228757]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 26 12:59:28 np0005596060 sad_neumann[228624]: --> passed data devices: 0 physical, 1 LVM
Jan 26 12:59:28 np0005596060 sad_neumann[228624]: --> relative data size: 1.0
Jan 26 12:59:28 np0005596060 sad_neumann[228624]: --> All data devices are unavailable
Jan 26 12:59:28 np0005596060 systemd[1]: libpod-7188cc42acd9c7b0f05f7cbff4f7ee26750670bda06b9c4294a0298a4d9db9e0.scope: Deactivated successfully.
Jan 26 12:59:28 np0005596060 podman[228608]: 2026-01-26 17:59:28.060367358 +0000 UTC m=+0.975627086 container died 7188cc42acd9c7b0f05f7cbff4f7ee26750670bda06b9c4294a0298a4d9db9e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 12:59:28 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0eeafa75ce036ade447014c0fad0e99fee2666eb459099ead1e8491df3faadb1-merged.mount: Deactivated successfully.
Jan 26 12:59:28 np0005596060 podman[228608]: 2026-01-26 17:59:28.174480892 +0000 UTC m=+1.089740620 container remove 7188cc42acd9c7b0f05f7cbff4f7ee26750670bda06b9c4294a0298a4d9db9e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_neumann, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 12:59:28 np0005596060 systemd[1]: libpod-conmon-7188cc42acd9c7b0f05f7cbff4f7ee26750670bda06b9c4294a0298a4d9db9e0.scope: Deactivated successfully.
Jan 26 12:59:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:28.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:28 np0005596060 python3.9[229040]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 26 12:59:28 np0005596060 kernel: Key type psk registered
Jan 26 12:59:28 np0005596060 podman[229073]: 2026-01-26 17:59:28.85395659 +0000 UTC m=+0.048498862 container create fbed4c29ca5b744c4a43d7b366f6193dcd2b671c11e743c787c3a8cf83940668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:59:28 np0005596060 systemd[1]: Started libpod-conmon-fbed4c29ca5b744c4a43d7b366f6193dcd2b671c11e743c787c3a8cf83940668.scope.
Jan 26 12:59:28 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:59:28 np0005596060 podman[229073]: 2026-01-26 17:59:28.83645551 +0000 UTC m=+0.030997802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:59:28 np0005596060 podman[229073]: 2026-01-26 17:59:28.937553605 +0000 UTC m=+0.132095927 container init fbed4c29ca5b744c4a43d7b366f6193dcd2b671c11e743c787c3a8cf83940668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hopper, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 12:59:28 np0005596060 podman[229073]: 2026-01-26 17:59:28.950721957 +0000 UTC m=+0.145264229 container start fbed4c29ca5b744c4a43d7b366f6193dcd2b671c11e743c787c3a8cf83940668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hopper, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 26 12:59:28 np0005596060 podman[229073]: 2026-01-26 17:59:28.955886447 +0000 UTC m=+0.150428799 container attach fbed4c29ca5b744c4a43d7b366f6193dcd2b671c11e743c787c3a8cf83940668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:59:28 np0005596060 busy_hopper[229094]: 167 167
Jan 26 12:59:28 np0005596060 systemd[1]: libpod-fbed4c29ca5b744c4a43d7b366f6193dcd2b671c11e743c787c3a8cf83940668.scope: Deactivated successfully.
Jan 26 12:59:28 np0005596060 podman[229073]: 2026-01-26 17:59:28.960456922 +0000 UTC m=+0.154999194 container died fbed4c29ca5b744c4a43d7b366f6193dcd2b671c11e743c787c3a8cf83940668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hopper, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Jan 26 12:59:28 np0005596060 systemd[1]: var-lib-containers-storage-overlay-436638a3227c8a623fc7fdca8b68579f484c4a0920c74bc6ca0b67938c49c829-merged.mount: Deactivated successfully.
Jan 26 12:59:29 np0005596060 podman[229073]: 2026-01-26 17:59:29.005462675 +0000 UTC m=+0.200004947 container remove fbed4c29ca5b744c4a43d7b366f6193dcd2b671c11e743c787c3a8cf83940668 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_hopper, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 12:59:29 np0005596060 systemd[1]: libpod-conmon-fbed4c29ca5b744c4a43d7b366f6193dcd2b671c11e743c787c3a8cf83940668.scope: Deactivated successfully.
Jan 26 12:59:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:29.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:29 np0005596060 podman[229194]: 2026-01-26 17:59:29.196070474 +0000 UTC m=+0.048350588 container create 3900994a1898b432be3504db3c404c3e653234be15d61eceba84b0371232ab96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:59:29 np0005596060 systemd[1]: Started libpod-conmon-3900994a1898b432be3504db3c404c3e653234be15d61eceba84b0371232ab96.scope.
Jan 26 12:59:29 np0005596060 podman[229194]: 2026-01-26 17:59:29.177009674 +0000 UTC m=+0.029289808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:59:29 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:59:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bea78ed801daa5370177621aab12f84377d31e565fc17cb961d80ce37da6875/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bea78ed801daa5370177621aab12f84377d31e565fc17cb961d80ce37da6875/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bea78ed801daa5370177621aab12f84377d31e565fc17cb961d80ce37da6875/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bea78ed801daa5370177621aab12f84377d31e565fc17cb961d80ce37da6875/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:29 np0005596060 podman[229194]: 2026-01-26 17:59:29.295507798 +0000 UTC m=+0.147787922 container init 3900994a1898b432be3504db3c404c3e653234be15d61eceba84b0371232ab96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_saha, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 12:59:29 np0005596060 podman[229194]: 2026-01-26 17:59:29.305933161 +0000 UTC m=+0.158213275 container start 3900994a1898b432be3504db3c404c3e653234be15d61eceba84b0371232ab96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_saha, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 12:59:29 np0005596060 podman[229194]: 2026-01-26 17:59:29.30907309 +0000 UTC m=+0.161353204 container attach 3900994a1898b432be3504db3c404c3e653234be15d61eceba84b0371232ab96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:59:29 np0005596060 python3.9[229293]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 12:59:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:30 np0005596060 nervous_saha[229237]: {
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:    "1": [
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:        {
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            "devices": [
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "/dev/loop3"
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            ],
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            "lv_name": "ceph_lv0",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            "lv_size": "7511998464",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            "name": "ceph_lv0",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            "tags": {
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "ceph.cephx_lockbox_secret": "",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "ceph.cluster_name": "ceph",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "ceph.crush_device_class": "",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "ceph.encrypted": "0",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "ceph.osd_id": "1",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "ceph.type": "block",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:                "ceph.vdo": "0"
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            },
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            "type": "block",
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:            "vg_name": "ceph_vg0"
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:        }
Jan 26 12:59:30 np0005596060 nervous_saha[229237]:    ]
Jan 26 12:59:30 np0005596060 nervous_saha[229237]: }
Jan 26 12:59:30 np0005596060 python3.9[229417]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769450369.078704-1052-139760843290024/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:30 np0005596060 systemd[1]: libpod-3900994a1898b432be3504db3c404c3e653234be15d61eceba84b0371232ab96.scope: Deactivated successfully.
Jan 26 12:59:30 np0005596060 podman[229194]: 2026-01-26 17:59:30.173595916 +0000 UTC m=+1.025876030 container died 3900994a1898b432be3504db3c404c3e653234be15d61eceba84b0371232ab96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_saha, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 12:59:30 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8bea78ed801daa5370177621aab12f84377d31e565fc17cb961d80ce37da6875-merged.mount: Deactivated successfully.
Jan 26 12:59:30 np0005596060 podman[229194]: 2026-01-26 17:59:30.252950444 +0000 UTC m=+1.105230558 container remove 3900994a1898b432be3504db3c404c3e653234be15d61eceba84b0371232ab96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_saha, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:59:30 np0005596060 systemd[1]: libpod-conmon-3900994a1898b432be3504db3c404c3e653234be15d61eceba84b0371232ab96.scope: Deactivated successfully.
Jan 26 12:59:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:30.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:30 np0005596060 python3.9[229687]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:30 np0005596060 podman[229726]: 2026-01-26 17:59:30.960863499 +0000 UTC m=+0.048593615 container create 6ea6b9100a36cda54fc60d4c12b23048b97654e2cb8a732331c0f90225335abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 12:59:31 np0005596060 systemd[1]: Started libpod-conmon-6ea6b9100a36cda54fc60d4c12b23048b97654e2cb8a732331c0f90225335abb.scope.
Jan 26 12:59:31 np0005596060 podman[229726]: 2026-01-26 17:59:30.936525426 +0000 UTC m=+0.024255542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:59:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:59:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:31.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:31 np0005596060 podman[229726]: 2026-01-26 17:59:31.057143463 +0000 UTC m=+0.144873569 container init 6ea6b9100a36cda54fc60d4c12b23048b97654e2cb8a732331c0f90225335abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 12:59:31 np0005596060 podman[229726]: 2026-01-26 17:59:31.069688219 +0000 UTC m=+0.157418305 container start 6ea6b9100a36cda54fc60d4c12b23048b97654e2cb8a732331c0f90225335abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:59:31 np0005596060 podman[229726]: 2026-01-26 17:59:31.072662174 +0000 UTC m=+0.160392290 container attach 6ea6b9100a36cda54fc60d4c12b23048b97654e2cb8a732331c0f90225335abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curran, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:59:31 np0005596060 mystifying_curran[229766]: 167 167
Jan 26 12:59:31 np0005596060 systemd[1]: libpod-6ea6b9100a36cda54fc60d4c12b23048b97654e2cb8a732331c0f90225335abb.scope: Deactivated successfully.
Jan 26 12:59:31 np0005596060 podman[229726]: 2026-01-26 17:59:31.07649918 +0000 UTC m=+0.164229296 container died 6ea6b9100a36cda54fc60d4c12b23048b97654e2cb8a732331c0f90225335abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curran, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 12:59:31 np0005596060 systemd[1]: var-lib-containers-storage-overlay-e56e2f07ebfaf65cd9ac3f8bf6056bc15024ab48068dd1422b9cbf5716e20b31-merged.mount: Deactivated successfully.
Jan 26 12:59:31 np0005596060 podman[229726]: 2026-01-26 17:59:31.126423378 +0000 UTC m=+0.214153494 container remove 6ea6b9100a36cda54fc60d4c12b23048b97654e2cb8a732331c0f90225335abb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:59:31 np0005596060 systemd[1]: libpod-conmon-6ea6b9100a36cda54fc60d4c12b23048b97654e2cb8a732331c0f90225335abb.scope: Deactivated successfully.
Jan 26 12:59:31 np0005596060 podman[229821]: 2026-01-26 17:59:31.302710686 +0000 UTC m=+0.048345418 container create b361b526a3d0391945fbfd54c117e6e00bb6c59a08cdc3e488fb5fe8e2cbcc39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_montalcini, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 12:59:31 np0005596060 systemd[1]: Started libpod-conmon-b361b526a3d0391945fbfd54c117e6e00bb6c59a08cdc3e488fb5fe8e2cbcc39.scope.
Jan 26 12:59:31 np0005596060 podman[229821]: 2026-01-26 17:59:31.280962749 +0000 UTC m=+0.026597511 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 12:59:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 12:59:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1873008c9dac545bbf2aba6aa7a124538c4ec22a86ddf226fd3ca70e7689aa0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1873008c9dac545bbf2aba6aa7a124538c4ec22a86ddf226fd3ca70e7689aa0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1873008c9dac545bbf2aba6aa7a124538c4ec22a86ddf226fd3ca70e7689aa0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1873008c9dac545bbf2aba6aa7a124538c4ec22a86ddf226fd3ca70e7689aa0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 12:59:31 np0005596060 podman[229821]: 2026-01-26 17:59:31.405765311 +0000 UTC m=+0.151400063 container init b361b526a3d0391945fbfd54c117e6e00bb6c59a08cdc3e488fb5fe8e2cbcc39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 12:59:31 np0005596060 podman[229821]: 2026-01-26 17:59:31.415935147 +0000 UTC m=+0.161569879 container start b361b526a3d0391945fbfd54c117e6e00bb6c59a08cdc3e488fb5fe8e2cbcc39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_montalcini, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 12:59:31 np0005596060 podman[229821]: 2026-01-26 17:59:31.42002058 +0000 UTC m=+0.165655342 container attach b361b526a3d0391945fbfd54c117e6e00bb6c59a08cdc3e488fb5fe8e2cbcc39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_montalcini, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 12:59:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:31 np0005596060 python3.9[229940]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:59:31 np0005596060 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 26 12:59:31 np0005596060 systemd[1]: Stopped Load Kernel Modules.
Jan 26 12:59:31 np0005596060 systemd[1]: Stopping Load Kernel Modules...
Jan 26 12:59:31 np0005596060 systemd[1]: Starting Load Kernel Modules...
Jan 26 12:59:31 np0005596060 systemd[1]: Finished Load Kernel Modules.
Jan 26 12:59:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:59:32 np0005596060 musing_montalcini[229882]: {
Jan 26 12:59:32 np0005596060 musing_montalcini[229882]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 12:59:32 np0005596060 musing_montalcini[229882]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 12:59:32 np0005596060 musing_montalcini[229882]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 12:59:32 np0005596060 musing_montalcini[229882]:        "osd_id": 1,
Jan 26 12:59:32 np0005596060 musing_montalcini[229882]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 12:59:32 np0005596060 musing_montalcini[229882]:        "type": "bluestore"
Jan 26 12:59:32 np0005596060 musing_montalcini[229882]:    }
Jan 26 12:59:32 np0005596060 musing_montalcini[229882]: }
Jan 26 12:59:32 np0005596060 systemd[1]: libpod-b361b526a3d0391945fbfd54c117e6e00bb6c59a08cdc3e488fb5fe8e2cbcc39.scope: Deactivated successfully.
Jan 26 12:59:32 np0005596060 podman[229821]: 2026-01-26 17:59:32.3609084 +0000 UTC m=+1.106543152 container died b361b526a3d0391945fbfd54c117e6e00bb6c59a08cdc3e488fb5fe8e2cbcc39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 12:59:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:32.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay-1873008c9dac545bbf2aba6aa7a124538c4ec22a86ddf226fd3ca70e7689aa0b-merged.mount: Deactivated successfully.
Jan 26 12:59:32 np0005596060 podman[229821]: 2026-01-26 17:59:32.422543832 +0000 UTC m=+1.168178564 container remove b361b526a3d0391945fbfd54c117e6e00bb6c59a08cdc3e488fb5fe8e2cbcc39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_montalcini, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 12:59:32 np0005596060 systemd[1]: libpod-conmon-b361b526a3d0391945fbfd54c117e6e00bb6c59a08cdc3e488fb5fe8e2cbcc39.scope: Deactivated successfully.
Jan 26 12:59:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 12:59:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:59:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 12:59:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:59:32 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 59377973-d269-4bf8-b7a6-4d0323c51c19 does not exist
Jan 26 12:59:32 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 4161ae27-6059-4493-93b4-c4589dfbdb75 does not exist
Jan 26 12:59:32 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 2d1ac0bc-aa1a-4d10-952f-b9b0de229d0d does not exist
Jan 26 12:59:32 np0005596060 python3.9[230125]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 12:59:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:33.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:33 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:59:33 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 12:59:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:34.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:35.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:35 np0005596060 systemd[1]: Reloading.
Jan 26 12:59:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:35 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:59:35 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:59:35 np0005596060 systemd[1]: Reloading.
Jan 26 12:59:36 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:59:36 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:59:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:36.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:36 np0005596060 systemd-logind[786]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 26 12:59:36 np0005596060 systemd-logind[786]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 26 12:59:36 np0005596060 lvm[230292]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 12:59:36 np0005596060 lvm[230292]: VG ceph_vg0 finished
Jan 26 12:59:36 np0005596060 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 12:59:37 np0005596060 systemd[1]: Starting man-db-cache-update.service...
Jan 26 12:59:37 np0005596060 systemd[1]: Reloading.
Jan 26 12:59:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:37.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:37 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:59:37 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:59:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:59:37 np0005596060 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 12:59:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:38.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:38 np0005596060 python3.9[231645]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:59:38 np0005596060 systemd[1]: Stopping Open-iSCSI...
Jan 26 12:59:38 np0005596060 iscsid[224180]: iscsid shutting down.
Jan 26 12:59:38 np0005596060 systemd[1]: iscsid.service: Deactivated successfully.
Jan 26 12:59:38 np0005596060 systemd[1]: Stopped Open-iSCSI.
Jan 26 12:59:38 np0005596060 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 26 12:59:38 np0005596060 systemd[1]: Starting Open-iSCSI...
Jan 26 12:59:38 np0005596060 systemd[1]: Started Open-iSCSI.
Jan 26 12:59:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:39.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:39 np0005596060 python3.9[231802]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 12:59:39 np0005596060 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 26 12:59:39 np0005596060 multipathd[228397]: exit (signal)
Jan 26 12:59:39 np0005596060 multipathd[228397]: --------shut down-------
Jan 26 12:59:40 np0005596060 systemd[1]: multipathd.service: Deactivated successfully.
Jan 26 12:59:40 np0005596060 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 26 12:59:40 np0005596060 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 26 12:59:40 np0005596060 multipathd[231808]: --------start up--------
Jan 26 12:59:40 np0005596060 multipathd[231808]: read /etc/multipath.conf
Jan 26 12:59:40 np0005596060 multipathd[231808]: path checkers start up
Jan 26 12:59:40 np0005596060 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 26 12:59:40 np0005596060 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 12:59:40 np0005596060 systemd[1]: Finished man-db-cache-update.service.
Jan 26 12:59:40 np0005596060 systemd[1]: man-db-cache-update.service: Consumed 1.951s CPU time.
Jan 26 12:59:40 np0005596060 systemd[1]: run-r4a1c74c920de420c90763d5fdd95950b.service: Deactivated successfully.
Jan 26 12:59:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:40.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:41.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:41 np0005596060 python3.9[231966]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 12:59:41 np0005596060 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 26 12:59:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:42 np0005596060 python3.9[232150]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 12:59:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:59:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:42.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:43 np0005596060 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 26 12:59:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:43.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:43 np0005596060 python3.9[232329]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 12:59:43 np0005596060 systemd[1]: Reloading.
Jan 26 12:59:43 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 12:59:43 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 12:59:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_17:59:44
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['images', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'vms', 'default.rgw.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:59:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 12:59:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000051s ======
Jan 26 12:59:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:44.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 26 12:59:44 np0005596060 python3.9[232515]: ansible-ansible.builtin.service_facts Invoked
Jan 26 12:59:44 np0005596060 network[232532]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 12:59:44 np0005596060 network[232533]: 'network-scripts' will be removed from distribution in near future.
Jan 26 12:59:44 np0005596060 network[232534]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 12:59:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:45.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:46.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:47.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:59:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:48.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:49.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:50.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:50 np0005596060 podman[232683]: 2026-01-26 17:59:50.824232758 +0000 UTC m=+0.073136019 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 12:59:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:51.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:51 np0005596060 podman[232727]: 2026-01-26 17:59:51.848408746 +0000 UTC m=+0.104448716 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 12:59:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:59:52 np0005596060 python3.9[232855]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:59:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:52.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:53.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:53 np0005596060 python3.9[233008]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:59:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:54 np0005596060 python3.9[233162]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:59:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:54.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:54 np0005596060 python3.9[233315]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:59:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:55.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:55 np0005596060 python3.9[233469]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:59:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:56.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:56 np0005596060 python3.9[233622]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:59:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 12:59:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:57.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 12:59:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 12:59:57 np0005596060 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 26 12:59:57 np0005596060 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 26 12:59:57 np0005596060 python3.9[233775]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:59:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 12:59:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:17:59:58.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:58 np0005596060 python3.9[233931]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 12:59:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 12:59:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 12:59:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:17:59:59.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 12:59:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:00 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 13:00:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:00.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:01.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:00:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:02.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:03.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:03 np0005596060 python3.9[234136]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:00:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:03 np0005596060 ceph-mon[74267]: overall HEALTH_OK
Jan 26 13:00:03 np0005596060 python3.9[234289]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:04.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:04 np0005596060 python3.9[234441]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:05.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:05 np0005596060 python3.9[234593]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:06 np0005596060 python3.9[234746]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:06.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:06 np0005596060 python3.9[234898]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:07.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:00:07 np0005596060 python3.9[235050]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:08 np0005596060 python3.9[235203]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:08.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:08 np0005596060 python3.9[235355]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:09.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:09 np0005596060 python3.9[235507]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:10 np0005596060 python3.9[235660]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:10.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:10 np0005596060 python3.9[235812]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:11.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:11 np0005596060 python3.9[235964]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:12 np0005596060 python3.9[236117]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:00:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:12.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:12 np0005596060 python3.9[236269]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 26 13:00:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:13.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 26 13:00:13 np0005596060 python3.9[236421]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:00:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:00:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:00:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:00:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:00:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:00:14 np0005596060 python3.9[236574]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 13:00:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:14.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:00:14.731 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:00:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:00:14.732 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:00:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:00:14.732 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.031390) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450415031485, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1462, "num_deletes": 251, "total_data_size": 2623995, "memory_usage": 2663584, "flush_reason": "Manual Compaction"}
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450415103100, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2582644, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15601, "largest_seqno": 17062, "table_properties": {"data_size": 2575905, "index_size": 3874, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13691, "raw_average_key_size": 19, "raw_value_size": 2562520, "raw_average_value_size": 3681, "num_data_blocks": 175, "num_entries": 696, "num_filter_entries": 696, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769450248, "oldest_key_time": 1769450248, "file_creation_time": 1769450415, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 71805 microseconds, and 7336 cpu microseconds.
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.103199) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2582644 bytes OK
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.103223) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.105936) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.105958) EVENT_LOG_v1 {"time_micros": 1769450415105950, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.105975) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2617878, prev total WAL file size 2617878, number of live WAL files 2.
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.107001) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2522KB)], [35(7688KB)]
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450415107121, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 10455163, "oldest_snapshot_seqno": -1}
Jan 26 13:00:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:15.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4335 keys, 8395350 bytes, temperature: kUnknown
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450415164367, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8395350, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8364571, "index_size": 18829, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 107393, "raw_average_key_size": 24, "raw_value_size": 8284380, "raw_average_value_size": 1911, "num_data_blocks": 789, "num_entries": 4335, "num_filter_entries": 4335, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769450415, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.165066) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8395350 bytes
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.166600) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.1 rd, 146.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 7.5 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(7.3) write-amplify(3.3) OK, records in: 4850, records dropped: 515 output_compression: NoCompression
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.166651) EVENT_LOG_v1 {"time_micros": 1769450415166616, "job": 16, "event": "compaction_finished", "compaction_time_micros": 57422, "compaction_time_cpu_micros": 27258, "output_level": 6, "num_output_files": 1, "total_output_size": 8395350, "num_input_records": 4850, "num_output_records": 4335, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450415167414, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450415169021, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.106876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.169149) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.169157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.169159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.169160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:00:15 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:15.169162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:00:15 np0005596060 python3.9[236726]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 13:00:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:16 np0005596060 python3.9[236879]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 13:00:16 np0005596060 systemd[1]: Reloading.
Jan 26 13:00:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:16.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:16 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 13:00:16 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 13:00:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:17.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:00:17 np0005596060 python3.9[237065]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 13:00:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:18.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:18 np0005596060 python3.9[237219]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 13:00:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 26 13:00:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:19.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 26 13:00:19 np0005596060 python3.9[237373]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 13:00:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:20 np0005596060 python3.9[237526]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 13:00:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:20.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:20 np0005596060 podman[237679]: 2026-01-26 18:00:20.946348247 +0000 UTC m=+0.058562223 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:00:21 np0005596060 python3.9[237680]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 13:00:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:21.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:21 np0005596060 python3.9[237853]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:00:22 np0005596060 podman[237908]: 2026-01-26 18:00:22.414899874 +0000 UTC m=+0.130418789 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 26 13:00:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:22.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:22.592140) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450422592253, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 303, "num_deletes": 255, "total_data_size": 108563, "memory_usage": 114824, "flush_reason": "Manual Compaction"}
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450422769063, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 108428, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17063, "largest_seqno": 17365, "table_properties": {"data_size": 106464, "index_size": 192, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4526, "raw_average_key_size": 16, "raw_value_size": 102610, "raw_average_value_size": 367, "num_data_blocks": 9, "num_entries": 279, "num_filter_entries": 279, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769450416, "oldest_key_time": 1769450416, "file_creation_time": 1769450422, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 177076 microseconds, and 1374 cpu microseconds.
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:00:22 np0005596060 python3.9[238079]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:22.769226) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 108428 bytes OK
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:22.769252) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:22.996397) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:22.996469) EVENT_LOG_v1 {"time_micros": 1769450422996451, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:22.996502) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 106367, prev total WAL file size 137133, number of live WAL files 2.
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:22.997191) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(105KB)], [38(8198KB)]
Jan 26 13:00:22 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450422997280, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 8503778, "oldest_snapshot_seqno": -1}
Jan 26 13:00:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:23.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4096 keys, 8149212 bytes, temperature: kUnknown
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450423235695, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8149212, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8120277, "index_size": 17625, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10245, "raw_key_size": 103720, "raw_average_key_size": 25, "raw_value_size": 8044296, "raw_average_value_size": 1963, "num_data_blocks": 725, "num_entries": 4096, "num_filter_entries": 4096, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769450422, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:23.236085) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8149212 bytes
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:23.251247) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 35.7 rd, 34.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.0 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(153.6) write-amplify(75.2) OK, records in: 4614, records dropped: 518 output_compression: NoCompression
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:23.251326) EVENT_LOG_v1 {"time_micros": 1769450423251293, "job": 18, "event": "compaction_finished", "compaction_time_micros": 238512, "compaction_time_cpu_micros": 28746, "output_level": 6, "num_output_files": 1, "total_output_size": 8149212, "num_input_records": 4614, "num_output_records": 4096, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450423251645, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450423255127, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:22.997019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:23.255234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:23.255242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:23.255244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:23.255246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:00:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:00:23.255248) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:00:23 np0005596060 python3.9[238232]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 13:00:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:24.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:25.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:25 np0005596060 python3.9[238386]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:26 np0005596060 python3.9[238539]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:26.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:26 np0005596060 python3.9[238691]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:27.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:00:27 np0005596060 python3.9[238844]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:28 np0005596060 python3.9[238996]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:28.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:29 np0005596060 python3.9[239148]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:29.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:29 np0005596060 python3.9[239301]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:30 np0005596060 python3.9[239453]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:30.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:31 np0005596060 python3.9[239605]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:31 np0005596060 python3.9[239758]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:00:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:32.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:33.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:34.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:35.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:36.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:37.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:37 np0005596060 python3.9[240045]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 26 13:00:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:00:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:00:38 np0005596060 python3.9[240199]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 13:00:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:38.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:00:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:00:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 26 13:00:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:39.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 26 13:00:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:00:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:00:40 np0005596060 python3.9[240358]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 13:00:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:00:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:40.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:00:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 735e90b2-e31f-4fd2-b777-6c40672ab0b4 does not exist
Jan 26 13:00:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5767e51e-6e67-4986-8f76-79835a997c9c does not exist
Jan 26 13:00:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 26524cad-a764-43ca-b6b2-12350b550ca8 does not exist
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:00:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:00:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:41.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:41 np0005596060 podman[240532]: 2026-01-26 18:00:41.431468262 +0000 UTC m=+0.066168384 container create 77bc024c413904726db4d30f716787c95238e2c1ea9b3422d298519215af828c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_black, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:00:41 np0005596060 podman[240532]: 2026-01-26 18:00:41.394465772 +0000 UTC m=+0.029165904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:00:41 np0005596060 systemd[1]: Started libpod-conmon-77bc024c413904726db4d30f716787c95238e2c1ea9b3422d298519215af828c.scope.
Jan 26 13:00:41 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:00:41 np0005596060 podman[240532]: 2026-01-26 18:00:41.565905581 +0000 UTC m=+0.200605773 container init 77bc024c413904726db4d30f716787c95238e2c1ea9b3422d298519215af828c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_black, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 13:00:41 np0005596060 podman[240532]: 2026-01-26 18:00:41.580981049 +0000 UTC m=+0.215681191 container start 77bc024c413904726db4d30f716787c95238e2c1ea9b3422d298519215af828c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_black, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:00:41 np0005596060 podman[240532]: 2026-01-26 18:00:41.585300048 +0000 UTC m=+0.220000150 container attach 77bc024c413904726db4d30f716787c95238e2c1ea9b3422d298519215af828c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_black, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:00:41 np0005596060 suspicious_black[240549]: 167 167
Jan 26 13:00:41 np0005596060 systemd[1]: libpod-77bc024c413904726db4d30f716787c95238e2c1ea9b3422d298519215af828c.scope: Deactivated successfully.
Jan 26 13:00:41 np0005596060 podman[240532]: 2026-01-26 18:00:41.592124219 +0000 UTC m=+0.226824361 container died 77bc024c413904726db4d30f716787c95238e2c1ea9b3422d298519215af828c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_black, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:00:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:00:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:00:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:00:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:00:41 np0005596060 systemd[1]: var-lib-containers-storage-overlay-dfe25796aa537bab02a8fa995d0c96d9f1df072c15453ac1122b015a99093f17-merged.mount: Deactivated successfully.
Jan 26 13:00:41 np0005596060 podman[240532]: 2026-01-26 18:00:41.890882116 +0000 UTC m=+0.525582218 container remove 77bc024c413904726db4d30f716787c95238e2c1ea9b3422d298519215af828c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 26 13:00:41 np0005596060 systemd[1]: libpod-conmon-77bc024c413904726db4d30f716787c95238e2c1ea9b3422d298519215af828c.scope: Deactivated successfully.
Jan 26 13:00:42 np0005596060 podman[240576]: 2026-01-26 18:00:42.084727838 +0000 UTC m=+0.048873669 container create b3969fd7e056070feb88d61372dd615ec69b288da2607cae18536e18e697d35d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:00:42 np0005596060 systemd-logind[786]: New session 51 of user zuul.
Jan 26 13:00:42 np0005596060 systemd[1]: Started Session 51 of User zuul.
Jan 26 13:00:42 np0005596060 podman[240576]: 2026-01-26 18:00:42.060259283 +0000 UTC m=+0.024405154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:00:42 np0005596060 systemd[1]: session-51.scope: Deactivated successfully.
Jan 26 13:00:42 np0005596060 systemd-logind[786]: Session 51 logged out. Waiting for processes to exit.
Jan 26 13:00:42 np0005596060 systemd-logind[786]: Removed session 51.
Jan 26 13:00:42 np0005596060 systemd[1]: Started libpod-conmon-b3969fd7e056070feb88d61372dd615ec69b288da2607cae18536e18e697d35d.scope.
Jan 26 13:00:42 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:00:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee897c08c7c44cf20dfc864185fd539674d70f190fd020bdff3ef1c53347bbb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee897c08c7c44cf20dfc864185fd539674d70f190fd020bdff3ef1c53347bbb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee897c08c7c44cf20dfc864185fd539674d70f190fd020bdff3ef1c53347bbb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee897c08c7c44cf20dfc864185fd539674d70f190fd020bdff3ef1c53347bbb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee897c08c7c44cf20dfc864185fd539674d70f190fd020bdff3ef1c53347bbb3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:00:42 np0005596060 podman[240576]: 2026-01-26 18:00:42.458931933 +0000 UTC m=+0.423077784 container init b3969fd7e056070feb88d61372dd615ec69b288da2607cae18536e18e697d35d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_clarke, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 26 13:00:42 np0005596060 podman[240576]: 2026-01-26 18:00:42.466348819 +0000 UTC m=+0.430494650 container start b3969fd7e056070feb88d61372dd615ec69b288da2607cae18536e18e697d35d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_clarke, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 13:00:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:42.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:42 np0005596060 podman[240576]: 2026-01-26 18:00:42.705488629 +0000 UTC m=+0.669634470 container attach b3969fd7e056070feb88d61372dd615ec69b288da2607cae18536e18e697d35d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_clarke, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Jan 26 13:00:42 np0005596060 python3.9[240799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 13:00:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:43.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:43 np0005596060 stupefied_clarke[240619]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:00:43 np0005596060 stupefied_clarke[240619]: --> relative data size: 1.0
Jan 26 13:00:43 np0005596060 stupefied_clarke[240619]: --> All data devices are unavailable
Jan 26 13:00:43 np0005596060 systemd[1]: libpod-b3969fd7e056070feb88d61372dd615ec69b288da2607cae18536e18e697d35d.scope: Deactivated successfully.
Jan 26 13:00:43 np0005596060 podman[240576]: 2026-01-26 18:00:43.31592866 +0000 UTC m=+1.280074491 container died b3969fd7e056070feb88d61372dd615ec69b288da2607cae18536e18e697d35d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:00:43 np0005596060 python3.9[240926]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769450442.4913716-2659-5300312385668/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:00:44
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.rgw.root', '.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'images']
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:00:44 np0005596060 python3.9[241093]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:00:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:00:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:44.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:44 np0005596060 python3.9[241170]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:44 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ee897c08c7c44cf20dfc864185fd539674d70f190fd020bdff3ef1c53347bbb3-merged.mount: Deactivated successfully.
Jan 26 13:00:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:45.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:45 np0005596060 python3.9[241320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 13:00:45 np0005596060 podman[240576]: 2026-01-26 18:00:45.46109126 +0000 UTC m=+3.425237091 container remove b3969fd7e056070feb88d61372dd615ec69b288da2607cae18536e18e697d35d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_clarke, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 13:00:45 np0005596060 systemd[1]: libpod-conmon-b3969fd7e056070feb88d61372dd615ec69b288da2607cae18536e18e697d35d.scope: Deactivated successfully.
Jan 26 13:00:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:45 np0005596060 python3.9[241542]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769450444.7949913-2659-111042959688134/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:46 np0005596060 podman[241610]: 2026-01-26 18:00:46.069981982 +0000 UTC m=+0.022346063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:00:46 np0005596060 podman[241610]: 2026-01-26 18:00:46.173874393 +0000 UTC m=+0.126238454 container create 2a2dca951c8f08af27402e6983da256b1b3ba35cde56a1aea0eeb234570b68a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 13:00:46 np0005596060 systemd[1]: Started libpod-conmon-2a2dca951c8f08af27402e6983da256b1b3ba35cde56a1aea0eeb234570b68a0.scope.
Jan 26 13:00:46 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:00:46 np0005596060 podman[241610]: 2026-01-26 18:00:46.285129699 +0000 UTC m=+0.237493770 container init 2a2dca951c8f08af27402e6983da256b1b3ba35cde56a1aea0eeb234570b68a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 26 13:00:46 np0005596060 podman[241610]: 2026-01-26 18:00:46.29191535 +0000 UTC m=+0.244279411 container start 2a2dca951c8f08af27402e6983da256b1b3ba35cde56a1aea0eeb234570b68a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_franklin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 13:00:46 np0005596060 podman[241610]: 2026-01-26 18:00:46.297959691 +0000 UTC m=+0.250323802 container attach 2a2dca951c8f08af27402e6983da256b1b3ba35cde56a1aea0eeb234570b68a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 13:00:46 np0005596060 goofy_franklin[241704]: 167 167
Jan 26 13:00:46 np0005596060 systemd[1]: libpod-2a2dca951c8f08af27402e6983da256b1b3ba35cde56a1aea0eeb234570b68a0.scope: Deactivated successfully.
Jan 26 13:00:46 np0005596060 podman[241610]: 2026-01-26 18:00:46.299518271 +0000 UTC m=+0.251882342 container died 2a2dca951c8f08af27402e6983da256b1b3ba35cde56a1aea0eeb234570b68a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_franklin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 13:00:46 np0005596060 systemd[1]: var-lib-containers-storage-overlay-021b3c24a00bae5701a23f1db6adefd2961911115442ce4cee5cd7a54951189d-merged.mount: Deactivated successfully.
Jan 26 13:00:46 np0005596060 podman[241610]: 2026-01-26 18:00:46.338951902 +0000 UTC m=+0.291315963 container remove 2a2dca951c8f08af27402e6983da256b1b3ba35cde56a1aea0eeb234570b68a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_franklin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 26 13:00:46 np0005596060 systemd[1]: libpod-conmon-2a2dca951c8f08af27402e6983da256b1b3ba35cde56a1aea0eeb234570b68a0.scope: Deactivated successfully.
Jan 26 13:00:46 np0005596060 python3.9[241759]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 13:00:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:46.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:46 np0005596060 podman[241771]: 2026-01-26 18:00:46.567595188 +0000 UTC m=+0.080187537 container create c35253aa0426518c1e2ed714864f028f7443da0602dee390fa0da89288151564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_margulis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:00:46 np0005596060 systemd[1]: Started libpod-conmon-c35253aa0426518c1e2ed714864f028f7443da0602dee390fa0da89288151564.scope.
Jan 26 13:00:46 np0005596060 podman[241771]: 2026-01-26 18:00:46.53505826 +0000 UTC m=+0.047650669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:00:46 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:00:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1d0552e5b21914f2ca80134f74188a59023b3dd0f9da135077ebe2499902e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1d0552e5b21914f2ca80134f74188a59023b3dd0f9da135077ebe2499902e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1d0552e5b21914f2ca80134f74188a59023b3dd0f9da135077ebe2499902e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1d0552e5b21914f2ca80134f74188a59023b3dd0f9da135077ebe2499902e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:46 np0005596060 podman[241771]: 2026-01-26 18:00:46.664118634 +0000 UTC m=+0.176710953 container init c35253aa0426518c1e2ed714864f028f7443da0602dee390fa0da89288151564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 13:00:46 np0005596060 podman[241771]: 2026-01-26 18:00:46.673183011 +0000 UTC m=+0.185775320 container start c35253aa0426518c1e2ed714864f028f7443da0602dee390fa0da89288151564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_margulis, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:00:46 np0005596060 podman[241771]: 2026-01-26 18:00:46.676853894 +0000 UTC m=+0.189446223 container attach c35253aa0426518c1e2ed714864f028f7443da0602dee390fa0da89288151564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_margulis, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:00:47 np0005596060 python3.9[241912]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769450446.066727-2659-173959258412323/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:47.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:00:47 np0005596060 musing_margulis[241811]: {
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:    "1": [
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:        {
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            "devices": [
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "/dev/loop3"
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            ],
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            "lv_name": "ceph_lv0",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            "lv_size": "7511998464",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            "name": "ceph_lv0",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            "tags": {
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "ceph.cluster_name": "ceph",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "ceph.crush_device_class": "",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "ceph.encrypted": "0",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "ceph.osd_id": "1",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "ceph.type": "block",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:                "ceph.vdo": "0"
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            },
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            "type": "block",
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:            "vg_name": "ceph_vg0"
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:        }
Jan 26 13:00:47 np0005596060 musing_margulis[241811]:    ]
Jan 26 13:00:47 np0005596060 musing_margulis[241811]: }
Jan 26 13:00:47 np0005596060 systemd[1]: libpod-c35253aa0426518c1e2ed714864f028f7443da0602dee390fa0da89288151564.scope: Deactivated successfully.
Jan 26 13:00:47 np0005596060 podman[242068]: 2026-01-26 18:00:47.569788374 +0000 UTC m=+0.028987069 container died c35253aa0426518c1e2ed714864f028f7443da0602dee390fa0da89288151564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:00:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4b1d0552e5b21914f2ca80134f74188a59023b3dd0f9da135077ebe2499902e3-merged.mount: Deactivated successfully.
Jan 26 13:00:47 np0005596060 podman[242068]: 2026-01-26 18:00:47.617033692 +0000 UTC m=+0.076232387 container remove c35253aa0426518c1e2ed714864f028f7443da0602dee390fa0da89288151564 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_margulis, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 13:00:47 np0005596060 systemd[1]: libpod-conmon-c35253aa0426518c1e2ed714864f028f7443da0602dee390fa0da89288151564.scope: Deactivated successfully.
Jan 26 13:00:47 np0005596060 python3.9[242066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 13:00:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:48.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:48 np0005596060 python3.9[242304]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769450447.1959574-2659-132169039341272/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:49 np0005596060 podman[242348]: 2026-01-26 18:00:49.008717295 +0000 UTC m=+0.023388688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:00:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:49.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:49 np0005596060 podman[242348]: 2026-01-26 18:00:49.387487634 +0000 UTC m=+0.402159017 container create ba7a178be463dac2ab0fce34189e95cd70a6e89499bb6e0fe1de9487e79ac091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bardeen, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:00:49 np0005596060 python3.9[242509]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 13:00:49 np0005596060 systemd[1]: Started libpod-conmon-ba7a178be463dac2ab0fce34189e95cd70a6e89499bb6e0fe1de9487e79ac091.scope.
Jan 26 13:00:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:49 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:00:49 np0005596060 podman[242348]: 2026-01-26 18:00:49.858994024 +0000 UTC m=+0.873665417 container init ba7a178be463dac2ab0fce34189e95cd70a6e89499bb6e0fe1de9487e79ac091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 26 13:00:49 np0005596060 podman[242348]: 2026-01-26 18:00:49.867678352 +0000 UTC m=+0.882349705 container start ba7a178be463dac2ab0fce34189e95cd70a6e89499bb6e0fe1de9487e79ac091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:00:49 np0005596060 quizzical_bardeen[242516]: 167 167
Jan 26 13:00:49 np0005596060 systemd[1]: libpod-ba7a178be463dac2ab0fce34189e95cd70a6e89499bb6e0fe1de9487e79ac091.scope: Deactivated successfully.
Jan 26 13:00:50 np0005596060 podman[242348]: 2026-01-26 18:00:50.013126518 +0000 UTC m=+1.027797891 container attach ba7a178be463dac2ab0fce34189e95cd70a6e89499bb6e0fe1de9487e79ac091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bardeen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:00:50 np0005596060 podman[242348]: 2026-01-26 18:00:50.013531528 +0000 UTC m=+1.028202891 container died ba7a178be463dac2ab0fce34189e95cd70a6e89499bb6e0fe1de9487e79ac091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 13:00:50 np0005596060 python3.9[242649]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769450449.1041195-2659-238790882419517/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:50.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:50 np0005596060 systemd[1]: var-lib-containers-storage-overlay-47d6851614a0dc22565e7ca9d43f105cd48ba27fab6b439f7613dbfcf4abea8a-merged.mount: Deactivated successfully.
Jan 26 13:00:50 np0005596060 podman[242348]: 2026-01-26 18:00:50.939336225 +0000 UTC m=+1.954007608 container remove ba7a178be463dac2ab0fce34189e95cd70a6e89499bb6e0fe1de9487e79ac091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_bardeen, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 13:00:50 np0005596060 systemd[1]: libpod-conmon-ba7a178be463dac2ab0fce34189e95cd70a6e89499bb6e0fe1de9487e79ac091.scope: Deactivated successfully.
Jan 26 13:00:51 np0005596060 podman[242805]: 2026-01-26 18:00:51.068968782 +0000 UTC m=+0.063346752 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 13:00:51 np0005596060 python3.9[242802]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:51.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:51 np0005596060 podman[242830]: 2026-01-26 18:00:51.184419444 +0000 UTC m=+0.100703612 container create 9476fbf68857ca1d141c841e5b174ca8f5fd5741c303c4800cffcd584780162a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wu, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:00:51 np0005596060 podman[242830]: 2026-01-26 18:00:51.108849825 +0000 UTC m=+0.025134013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:00:51 np0005596060 systemd[1]: Started libpod-conmon-9476fbf68857ca1d141c841e5b174ca8f5fd5741c303c4800cffcd584780162a.scope.
Jan 26 13:00:51 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:00:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ad4439dd12fe1ab97efe7e265caba7402e701d077d30d15e01750db5d1022a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ad4439dd12fe1ab97efe7e265caba7402e701d077d30d15e01750db5d1022a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ad4439dd12fe1ab97efe7e265caba7402e701d077d30d15e01750db5d1022a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34ad4439dd12fe1ab97efe7e265caba7402e701d077d30d15e01750db5d1022a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:00:51 np0005596060 podman[242830]: 2026-01-26 18:00:51.282436957 +0000 UTC m=+0.198721145 container init 9476fbf68857ca1d141c841e5b174ca8f5fd5741c303c4800cffcd584780162a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wu, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 13:00:51 np0005596060 podman[242830]: 2026-01-26 18:00:51.290446669 +0000 UTC m=+0.206730837 container start 9476fbf68857ca1d141c841e5b174ca8f5fd5741c303c4800cffcd584780162a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wu, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 13:00:51 np0005596060 podman[242830]: 2026-01-26 18:00:51.293138716 +0000 UTC m=+0.209422984 container attach 9476fbf68857ca1d141c841e5b174ca8f5fd5741c303c4800cffcd584780162a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:00:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:51 np0005596060 python3.9[243004]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:00:52 np0005596060 dazzling_wu[242871]: {
Jan 26 13:00:52 np0005596060 dazzling_wu[242871]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:00:52 np0005596060 dazzling_wu[242871]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:00:52 np0005596060 dazzling_wu[242871]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:00:52 np0005596060 dazzling_wu[242871]:        "osd_id": 1,
Jan 26 13:00:52 np0005596060 dazzling_wu[242871]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:00:52 np0005596060 dazzling_wu[242871]:        "type": "bluestore"
Jan 26 13:00:52 np0005596060 dazzling_wu[242871]:    }
Jan 26 13:00:52 np0005596060 dazzling_wu[242871]: }
Jan 26 13:00:52 np0005596060 systemd[1]: libpod-9476fbf68857ca1d141c841e5b174ca8f5fd5741c303c4800cffcd584780162a.scope: Deactivated successfully.
Jan 26 13:00:52 np0005596060 podman[242830]: 2026-01-26 18:00:52.129754802 +0000 UTC m=+1.046038980 container died 9476fbf68857ca1d141c841e5b174ca8f5fd5741c303c4800cffcd584780162a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wu, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:00:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:00:52 np0005596060 python3.9[243183]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 13:00:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:52.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:52 np0005596060 systemd[1]: var-lib-containers-storage-overlay-34ad4439dd12fe1ab97efe7e265caba7402e701d077d30d15e01750db5d1022a-merged.mount: Deactivated successfully.
Jan 26 13:00:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:53.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:53 np0005596060 python3.9[243346]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 13:00:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:53 np0005596060 python3.9[243470]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769450452.864497-2980-10964095107601/.source _original_basename=.gvbb8h3z follow=False checksum=e2b143a6e9c59921c5923903d0182d35b0ae4ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 26 13:00:54 np0005596060 podman[242830]: 2026-01-26 18:00:54.160205979 +0000 UTC m=+3.076490147 container remove 9476fbf68857ca1d141c841e5b174ca8f5fd5741c303c4800cffcd584780162a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:00:54 np0005596060 systemd[1]: libpod-conmon-9476fbf68857ca1d141c841e5b174ca8f5fd5741c303c4800cffcd584780162a.scope: Deactivated successfully.
Jan 26 13:00:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:00:54 np0005596060 podman[243185]: 2026-01-26 18:00:54.334481208 +0000 UTC m=+1.811461265 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 13:00:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:54.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:00:54 np0005596060 python3.9[243639]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 13:00:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:00:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:55.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:55 np0005596060 python3.9[243792]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 13:00:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:00:56 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 6162d9c5-c227-4b2c-9858-c8ac91efd831 does not exist
Jan 26 13:00:56 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d00cc248-03c7-4b54-9fcb-0d5d7a4b2fde does not exist
Jan 26 13:00:56 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8909b884-13ed-40ec-90b6-aa3e7f310335 does not exist
Jan 26 13:00:56 np0005596060 python3.9[243932]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769450455.1687596-3058-6655182031360/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:56.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:56 np0005596060 python3.9[244113]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 13:00:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:00:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:00:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:00:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:57.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:00:57 np0005596060 python3.9[244234]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769450456.4468145-3103-263880817635519/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 13:00:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:00:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:00:58.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:58 np0005596060 python3.9[244387]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 26 13:00:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:00:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:00:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:00:59.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:00:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:00:59 np0005596060 python3.9[244540]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 13:01:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:00.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:01.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:01 np0005596060 python3[244692]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 13:01:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:01:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:01:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:02.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:01:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:03.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:01:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:01:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:04.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:01:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:05.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:01:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:06.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:01:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:07.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:08.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:01:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:09.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:10.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:01:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:11.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:01:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:01:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 8479 writes, 34K keys, 8479 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 8479 writes, 1774 syncs, 4.78 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 514 writes, 857 keys, 514 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s#012Interval WAL: 514 writes, 213 syncs, 2.41 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556c7332c2d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556c7332c2d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_
Jan 26 13:01:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:12.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:13.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:01:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:01:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:01:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:01:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:01:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:01:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:14.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:01:14.732 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:01:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:01:14.733 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:01:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:01:14.733 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:01:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:15.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:01:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:16.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:01:17 np0005596060 ceph-mds[93477]: mds.beacon.cephfs.compute-0.wenkwv missed beacon ack from the monitors
Jan 26 13:01:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:17.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:01:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:18.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:01:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:19.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:20.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:21 np0005596060 ceph-mds[93477]: mds.beacon.cephfs.compute-0.wenkwv missed beacon ack from the monitors
Jan 26 13:01:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:21.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:22.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:23.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:23 np0005596060 ceph-mgr[74563]: [devicehealth INFO root] Check health
Jan 26 13:01:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:01:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:24.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:01:25 np0005596060 ceph-mds[93477]: mds.beacon.cephfs.compute-0.wenkwv missed beacon ack from the monitors
Jan 26 13:01:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:25.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:26 np0005596060 podman[244838]: 2026-01-26 18:01:26.585960763 +0000 UTC m=+4.845759240 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 13:01:26 np0005596060 podman[244900]: 2026-01-26 18:01:26.615093456 +0000 UTC m=+1.881245740 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 26 13:01:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e10 check_health: resetting beacon timeouts due to mon delay (slow election?) of 17.5759 seconds
Jan 26 13:01:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:01:26 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 26 13:01:26 np0005596060 ceph-mon[74267]: paxos.0).electionLogic(15) init, last seen epoch 15, mid-election, bumping
Jan 26 13:01:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:01:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:26.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:01:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 13:01:26 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 26 13:01:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:27.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: mon.compute-1 calling monitor election
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: mon.compute-2 calling monitor election
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: mon.compute-2 is new leader, mons compute-2,compute-1 in quorum (ranks 1,2)
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: Health check failed: 1/3 mons down, quorum compute-2,compute-1 (MON_DOWN)
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-2,compute-1
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-2,compute-1
Jan 26 13:01:27 np0005596060 ceph-mon[74267]:    mon.compute-0 (rank 0) addr [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] is down (out of quorum)
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.oqvedy=up:active} 2 up:standby
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.mbryrf(active, since 21m), standbys: compute-2.cchxrf, compute-1.qpyzhk
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-2,compute-1)
Jan 26 13:01:27 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 26 13:01:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:27 np0005596060 podman[244704]: 2026-01-26 18:01:27.950735812 +0000 UTC m=+26.616630756 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 26 13:01:28 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 13:01:28 np0005596060 podman[244959]: 2026-01-26 18:01:28.077665382 +0000 UTC m=+0.026146628 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 26 13:01:28 np0005596060 ceph-mon[74267]: mon.compute-0 calling monitor election
Jan 26 13:01:28 np0005596060 ceph-mon[74267]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 26 13:01:28 np0005596060 ceph-mon[74267]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-2,compute-1)
Jan 26 13:01:28 np0005596060 ceph-mon[74267]: Cluster is now healthy
Jan 26 13:01:28 np0005596060 podman[244959]: 2026-01-26 18:01:28.357699419 +0000 UTC m=+0.306180585 container create a3fbc1de11280edbc92230cdb2ac506cdde0a0bfad95daae9f98bb8f03a77429 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 26 13:01:28 np0005596060 python3[244692]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 26 13:01:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:28.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:29.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:29 np0005596060 python3.9[245149]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 13:01:29 np0005596060 ceph-mon[74267]: overall HEALTH_OK
Jan 26 13:01:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:30 np0005596060 python3.9[245304]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 26 13:01:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:01:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:30.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:01:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:31.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:31 np0005596060 python3.9[245456]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 13:01:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:01:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:32 np0005596060 python3[245609]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 13:01:32 np0005596060 podman[245646]: 2026-01-26 18:01:32.486343256 +0000 UTC m=+0.021106512 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 26 13:01:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:32.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:32 np0005596060 podman[245646]: 2026-01-26 18:01:32.772365064 +0000 UTC m=+0.307128330 container create 7569887fe02f2c8198c9b9abc7100a68d6e8915b3199e6433af5977ed385d86f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 13:01:32 np0005596060 python3[245609]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 26 13:01:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:33.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:33 np0005596060 python3.9[245836]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 13:01:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:34 np0005596060 python3.9[245991]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:01:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:34.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:35 np0005596060 python3.9[246142]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769450494.528809-3391-104077411335031/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 13:01:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:35.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:36 np0005596060 python3.9[246219]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 13:01:36 np0005596060 systemd[1]: Reloading.
Jan 26 13:01:36 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 13:01:36 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 13:01:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:01:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:36.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:37 np0005596060 python3.9[246330]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 13:01:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:37.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:37 np0005596060 systemd[1]: Reloading.
Jan 26 13:01:37 np0005596060 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 13:01:37 np0005596060 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 13:01:37 np0005596060 systemd[1]: Starting nova_compute container...
Jan 26 13:01:37 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:01:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c852f78818126c03492a5f9fbce795cf9f0d30c82c6f44e03759af8663316f14/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c852f78818126c03492a5f9fbce795cf9f0d30c82c6f44e03759af8663316f14/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c852f78818126c03492a5f9fbce795cf9f0d30c82c6f44e03759af8663316f14/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c852f78818126c03492a5f9fbce795cf9f0d30c82c6f44e03759af8663316f14/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c852f78818126c03492a5f9fbce795cf9f0d30c82c6f44e03759af8663316f14/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:37 np0005596060 podman[246370]: 2026-01-26 18:01:37.700652407 +0000 UTC m=+0.109816670 container init 7569887fe02f2c8198c9b9abc7100a68d6e8915b3199e6433af5977ed385d86f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 13:01:37 np0005596060 podman[246370]: 2026-01-26 18:01:37.709576432 +0000 UTC m=+0.118740665 container start 7569887fe02f2c8198c9b9abc7100a68d6e8915b3199e6433af5977ed385d86f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:01:37 np0005596060 podman[246370]: nova_compute
Jan 26 13:01:37 np0005596060 nova_compute[246386]: + sudo -E kolla_set_configs
Jan 26 13:01:37 np0005596060 systemd[1]: Started nova_compute container.
Jan 26 13:01:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Validating config file
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Copying service configuration files
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Deleting /etc/ceph
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Creating directory /etc/ceph
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /etc/ceph
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Writing out command to execute
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 13:01:37 np0005596060 nova_compute[246386]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 13:01:37 np0005596060 nova_compute[246386]: ++ cat /run_command
Jan 26 13:01:37 np0005596060 nova_compute[246386]: + CMD=nova-compute
Jan 26 13:01:37 np0005596060 nova_compute[246386]: + ARGS=
Jan 26 13:01:37 np0005596060 nova_compute[246386]: + sudo kolla_copy_cacerts
Jan 26 13:01:37 np0005596060 nova_compute[246386]: + [[ ! -n '' ]]
Jan 26 13:01:37 np0005596060 nova_compute[246386]: + . kolla_extend_start
Jan 26 13:01:37 np0005596060 nova_compute[246386]: + echo 'Running command: '\''nova-compute'\'''
Jan 26 13:01:37 np0005596060 nova_compute[246386]: Running command: 'nova-compute'
Jan 26 13:01:37 np0005596060 nova_compute[246386]: + umask 0022
Jan 26 13:01:37 np0005596060 nova_compute[246386]: + exec nova-compute
Jan 26 13:01:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:38.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:39.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:40 np0005596060 nova_compute[246386]: 2026-01-26 18:01:40.073 246390 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 26 13:01:40 np0005596060 nova_compute[246386]: 2026-01-26 18:01:40.073 246390 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 26 13:01:40 np0005596060 nova_compute[246386]: 2026-01-26 18:01:40.074 246390 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 26 13:01:40 np0005596060 nova_compute[246386]: 2026-01-26 18:01:40.074 246390 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 26 13:01:40 np0005596060 nova_compute[246386]: 2026-01-26 18:01:40.238 246390 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:01:40 np0005596060 python3.9[246551]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 13:01:40 np0005596060 nova_compute[246386]: 2026-01-26 18:01:40.269 246390 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:01:40 np0005596060 nova_compute[246386]: 2026-01-26 18:01:40.270 246390 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 26 13:01:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:40.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:40 np0005596060 nova_compute[246386]: 2026-01-26 18:01:40.998 246390 INFO nova.virt.driver [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.136 246390 INFO nova.compute.provider_config [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.171 246390 DEBUG oslo_concurrency.lockutils [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.171 246390 DEBUG oslo_concurrency.lockutils [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.171 246390 DEBUG oslo_concurrency.lockutils [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.172 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.172 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.172 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.173 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.173 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.173 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.174 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.174 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.174 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.174 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.174 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.175 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.175 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.175 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.176 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.176 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.176 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.177 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.177 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.177 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.177 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.178 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.178 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.178 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.178 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.178 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.179 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.180 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.180 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.180 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.180 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.180 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.180 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.181 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.181 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.181 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.181 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.181 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.182 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.182 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.182 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.182 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.182 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.182 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.183 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.183 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.183 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.183 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.184 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.184 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.184 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.184 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.184 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.185 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.185 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.185 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.185 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.185 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.185 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.186 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.186 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.186 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.186 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.187 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.187 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.187 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.187 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.187 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.187 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.188 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.188 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.188 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.188 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.188 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.188 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.189 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.189 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.189 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.189 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.189 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.190 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.190 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.190 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.190 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.190 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.190 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.191 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.191 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.191 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.191 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.191 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.191 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.192 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.192 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.192 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.192 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.192 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.192 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.193 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.193 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.193 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.193 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.193 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.193 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.194 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.194 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.194 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.194 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.194 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.194 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.194 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.195 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.195 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.195 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.195 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.195 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.195 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.196 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.196 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.196 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.196 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.196 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.196 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.197 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.197 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.197 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.197 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.197 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.197 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.198 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.198 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.198 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.198 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.198 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.199 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.199 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.199 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.199 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.199 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.199 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.199 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.200 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.200 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.200 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.200 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.200 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.200 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.201 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.201 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.201 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.201 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.201 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.202 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.202 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.202 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.202 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.202 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.202 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.203 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.203 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.203 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.203 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.203 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.203 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.203 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.204 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.204 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.204 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.204 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.204 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.204 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.205 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.205 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.205 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.205 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.205 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.205 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.206 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.206 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.206 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.206 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.206 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.207 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.207 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.207 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.207 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.207 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.207 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.208 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.208 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.208 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.208 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.208 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.208 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.208 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.209 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.209 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.209 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.209 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.209 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.210 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.210 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.210 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.210 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.210 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.210 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.211 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.211 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.211 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.211 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.211 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.211 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.212 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.212 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.212 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.212 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.212 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.212 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.213 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.213 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.213 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.213 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.213 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.213 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.214 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.214 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.214 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.214 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.214 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.214 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.215 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.215 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.215 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.215 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.215 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.216 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.216 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.216 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.216 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.216 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.216 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.217 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.217 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.217 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.217 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.217 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.217 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.218 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.218 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.218 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.218 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.218 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.218 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.218 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.219 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.219 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.219 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.219 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.219 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.219 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.220 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.220 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.220 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.220 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.220 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.220 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.221 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.221 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.221 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.221 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.222 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.222 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.222 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.222 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.222 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.222 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.223 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.223 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.223 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.223 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.223 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.223 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.224 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.224 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.224 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.224 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.225 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.225 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.225 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.225 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.225 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.226 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.226 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.226 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.226 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.226 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.226 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.227 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.227 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.227 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.227 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:41.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.228 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.228 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.228 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.228 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.228 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.229 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.229 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.229 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.229 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.229 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.229 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.230 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.230 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.230 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.230 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.230 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.230 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.231 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.231 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.231 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.231 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.231 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.231 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.232 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.232 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.232 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.232 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.232 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.233 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.233 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.233 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.233 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.233 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.233 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.234 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.234 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.234 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.234 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.234 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.235 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.235 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.235 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.235 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.235 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.235 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.236 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.236 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.236 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.236 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.236 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.236 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.237 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.237 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.237 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.237 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.238 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.238 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.238 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.238 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.238 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.238 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.239 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.239 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.239 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.239 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.239 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.239 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.239 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.240 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.240 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.240 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.240 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.240 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.241 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.241 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.241 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.241 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.241 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.242 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.242 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.242 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.242 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.242 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.243 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.243 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 python3.9[246703]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.243 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.243 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.243 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.244 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.244 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.244 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.244 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.244 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.245 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.245 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.245 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.245 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.245 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.246 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.246 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.246 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.246 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.246 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.246 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.247 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.247 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.247 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.247 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.247 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.248 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.248 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.248 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.248 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.248 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.249 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.249 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.250 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.250 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.250 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.250 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.250 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.251 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.251 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.251 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.251 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.251 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.252 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.252 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.252 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.252 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.252 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.252 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.253 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.253 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.253 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.253 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.253 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.253 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.254 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.254 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.254 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.254 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.254 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.254 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.254 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.255 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.255 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.255 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.255 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.256 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.256 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.256 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.256 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.256 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.256 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.257 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.257 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.257 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.257 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.257 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.257 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.258 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.258 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.258 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.258 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.258 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.258 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.259 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.259 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.259 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.259 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.259 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.259 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.259 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.260 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.260 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.260 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.260 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.260 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.260 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.261 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.261 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.261 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.261 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.261 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.262 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.262 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.262 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.262 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.262 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.263 246390 WARNING oslo_config.cfg [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 26 13:01:41 np0005596060 nova_compute[246386]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 26 13:01:41 np0005596060 nova_compute[246386]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 26 13:01:41 np0005596060 nova_compute[246386]: and ``live_migration_inbound_addr`` respectively.
Jan 26 13:01:41 np0005596060 nova_compute[246386]: ).  Its value may be silently ignored in the future.#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.263 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.263 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.263 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.263 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.264 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.264 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.264 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.264 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.264 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.264 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.265 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.265 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.265 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.265 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.265 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.265 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.266 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.266 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.266 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.rbd_secret_uuid        = d4cd1917-5876-51b6-bc64-65a16199754d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.266 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.266 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.266 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.267 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.267 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.267 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.267 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.267 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.267 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.268 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.268 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.268 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.268 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.268 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.268 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.268 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.269 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.269 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.269 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.269 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.269 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.269 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.270 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.270 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.270 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.270 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.270 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.270 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.271 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.271 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.271 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.271 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.271 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.271 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.272 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.272 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.272 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.272 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.272 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.272 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.272 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.273 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.273 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.273 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.273 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.273 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.273 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.274 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.274 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.274 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.274 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.274 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.274 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.274 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.275 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.275 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.275 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.275 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.275 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.275 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.275 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.276 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.276 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.276 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.276 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.276 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.276 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.277 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.277 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.277 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.277 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.277 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.277 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.278 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.278 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.278 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.278 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.278 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.278 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.279 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.279 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.279 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.279 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.279 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.279 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.279 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.280 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.280 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.280 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.280 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.280 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.280 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.281 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.281 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.281 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.281 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.281 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.281 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.281 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.282 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.282 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.282 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.282 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.282 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.282 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.283 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.283 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.283 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.283 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.283 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.283 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.283 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.284 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.284 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.284 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.284 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.284 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.284 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.285 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.285 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.285 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.285 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.285 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.286 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.286 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.286 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.286 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.286 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.287 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.287 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.287 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.287 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.287 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.287 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.288 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.288 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.288 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.288 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.288 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.288 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.289 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.289 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.289 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.289 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.289 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.289 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.289 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.290 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.290 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.290 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.290 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.290 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.290 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.290 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.291 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.291 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.291 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.291 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.291 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.291 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.292 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.292 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.292 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.292 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.292 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.292 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.293 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.293 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.293 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.293 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.293 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.293 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.293 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.294 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.294 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.294 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.294 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.294 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.295 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.295 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.295 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.295 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.295 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.295 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.296 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.296 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.296 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.296 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.296 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.296 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.297 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.297 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.297 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.297 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.297 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.297 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.298 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.298 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.298 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.298 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.298 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.299 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.299 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.299 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.299 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.299 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.300 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.300 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.300 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.300 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.300 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.300 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.300 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.301 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.301 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.301 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.301 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.301 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.301 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.302 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.302 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.302 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.302 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.302 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.303 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.303 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.303 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.303 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.304 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.304 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.304 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.304 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.304 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.305 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.305 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.305 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.305 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.305 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.306 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.306 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.306 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.306 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.306 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.306 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.307 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.307 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.307 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.307 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.307 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.307 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.308 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.308 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.308 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.308 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.308 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.308 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.309 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.309 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.309 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.309 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.309 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.309 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.310 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.310 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.310 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.310 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.310 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.310 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.311 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.311 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.311 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.311 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.311 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.311 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.312 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.312 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.312 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.312 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.312 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.313 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.313 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.313 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.313 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.313 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.313 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.314 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.314 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.314 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.314 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.314 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.314 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.315 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.315 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.315 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.315 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.315 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.316 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.316 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.316 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.316 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.316 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.317 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.317 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.317 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.317 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.317 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.317 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.318 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.318 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.318 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.318 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.318 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.318 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.319 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.319 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.319 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.319 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.319 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.320 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.320 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.320 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.320 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.320 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.321 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.321 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.321 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.321 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.321 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.321 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.322 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.322 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.322 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.322 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.322 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.322 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.322 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.323 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.323 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.323 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.323 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.323 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.323 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.323 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.324 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.324 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.324 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.324 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.324 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.324 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.325 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.325 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.325 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.325 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.325 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.325 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.326 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.326 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.326 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.326 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.326 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.326 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.326 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.327 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.327 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.327 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.327 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.327 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.327 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.328 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.328 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.328 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.328 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.328 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.329 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.329 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.329 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.329 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.329 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.329 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.330 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.330 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.330 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.330 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.330 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.330 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.331 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.331 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.331 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.331 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.331 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.331 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.332 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.332 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.332 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.332 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.332 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.333 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.333 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.333 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.333 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.333 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.334 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.334 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.334 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.334 246390 DEBUG oslo_service.service [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.335 246390 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.355 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.355 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.356 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.356 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 26 13:01:41 np0005596060 systemd[1]: Starting libvirt QEMU daemon...
Jan 26 13:01:41 np0005596060 systemd[1]: Started libvirt QEMU daemon.
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.434 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f77a2f2a6a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.437 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f77a2f2a6a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.438 246390 INFO nova.virt.libvirt.driver [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.464 246390 WARNING nova.virt.libvirt.driver [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 26 13:01:41 np0005596060 nova_compute[246386]: 2026-01-26 18:01:41.466 246390 DEBUG nova.virt.libvirt.volume.mount [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 26 13:01:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:01:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:42 np0005596060 python3.9[246906]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.271 246390 INFO nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Libvirt host capabilities <capabilities>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <host>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <uuid>d27b7a41-30de-40e4-9f10-b4e4f5902919</uuid>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <cpu>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <arch>x86_64</arch>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model>EPYC-Rome-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <vendor>AMD</vendor>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <microcode version='16777317'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <signature family='23' model='49' stepping='0'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='x2apic'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='tsc-deadline'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='osxsave'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='hypervisor'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='tsc_adjust'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='spec-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='stibp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='arch-capabilities'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='cmp_legacy'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='topoext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='virt-ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='lbrv'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='tsc-scale'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='vmcb-clean'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='pause-filter'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='pfthreshold'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='svme-addr-chk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='rdctl-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='skip-l1dfl-vmentry'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='mds-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature name='pschange-mc-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <pages unit='KiB' size='4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <pages unit='KiB' size='2048'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <pages unit='KiB' size='1048576'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </cpu>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <power_management>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <suspend_mem/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </power_management>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <iommu support='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <migration_features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <live/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <uri_transports>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <uri_transport>tcp</uri_transport>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <uri_transport>rdma</uri_transport>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </uri_transports>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </migration_features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <topology>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <cells num='1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <cell id='0'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:          <memory unit='KiB'>7864316</memory>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:          <pages unit='KiB' size='4'>1966079</pages>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:          <pages unit='KiB' size='2048'>0</pages>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:          <distances>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:            <sibling id='0' value='10'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:          </distances>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:          <cpus num='8'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:          </cpus>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        </cell>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </cells>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </topology>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <cache>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </cache>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <secmodel>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model>selinux</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <doi>0</doi>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </secmodel>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <secmodel>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model>dac</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <doi>0</doi>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </secmodel>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </host>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <guest>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <os_type>hvm</os_type>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <arch name='i686'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <wordsize>32</wordsize>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <domain type='qemu'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <domain type='kvm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </arch>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <pae/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <nonpae/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <acpi default='on' toggle='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <apic default='on' toggle='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <cpuselection/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <deviceboot/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <disksnapshot default='on' toggle='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <externalSnapshot/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </guest>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <guest>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <os_type>hvm</os_type>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <arch name='x86_64'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <wordsize>64</wordsize>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <domain type='qemu'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <domain type='kvm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </arch>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <acpi default='on' toggle='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <apic default='on' toggle='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <cpuselection/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <deviceboot/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <disksnapshot default='on' toggle='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <externalSnapshot/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </guest>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 
Jan 26 13:01:42 np0005596060 nova_compute[246386]: </capabilities>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: #033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.279 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.302 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 26 13:01:42 np0005596060 nova_compute[246386]: <domainCapabilities>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <path>/usr/libexec/qemu-kvm</path>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <domain>kvm</domain>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <arch>i686</arch>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <vcpu max='4096'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <iothreads supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <os supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <enum name='firmware'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <loader supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>rom</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pflash</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='readonly'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>yes</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>no</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='secure'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>no</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </loader>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </os>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <cpu>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='host-passthrough' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='hostPassthroughMigratable'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>on</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>off</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='maximum' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='maximumMigratable'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>on</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>off</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='host-model' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <vendor>AMD</vendor>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='x2apic'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='tsc-deadline'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='hypervisor'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='tsc_adjust'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='spec-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='stibp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='cmp_legacy'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='overflow-recov'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='succor'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='amd-ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='virt-ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='lbrv'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='tsc-scale'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='vmcb-clean'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='flushbyasid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='pause-filter'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='pfthreshold'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='svme-addr-chk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='disable' name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='custom' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='ClearwaterForest'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ddpd-u'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sha512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm3'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='ClearwaterForest-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ddpd-u'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sha512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm3'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cooperlake'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cooperlake-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cooperlake-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Dhyana-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Genoa'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Genoa-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Genoa-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='perfmon-v2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Turin'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vp2intersect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibpb-brtype'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='perfmon-v2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbpb'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='srso-user-kernel-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Turin-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vp2intersect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibpb-brtype'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='perfmon-v2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbpb'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='srso-user-kernel-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-128'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-256'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-128'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-256'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v6'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v7'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='KnightsMill'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4fmaps'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4vnniw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512er'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512pf'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='KnightsMill-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4fmaps'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4vnniw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512er'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512pf'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G4-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tbm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G5-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tbm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='athlon'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='athlon-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='core2duo'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='core2duo-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='coreduo'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='coreduo-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='n270'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='n270-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='phenom'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='phenom-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </cpu>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <memoryBacking supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <enum name='sourceType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>file</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>anonymous</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>memfd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </memoryBacking>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <devices>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <disk supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='diskDevice'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>disk</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>cdrom</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>floppy</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>lun</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='bus'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>fdc</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>scsi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>usb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>sata</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-non-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </disk>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <graphics supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vnc</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>egl-headless</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>dbus</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </graphics>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <video supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='modelType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vga</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>cirrus</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>none</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>bochs</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>ramfb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </video>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <hostdev supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='mode'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>subsystem</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='startupPolicy'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>default</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>mandatory</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>requisite</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>optional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='subsysType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>usb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pci</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>scsi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='capsType'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='pciBackend'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </hostdev>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <rng supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-non-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendModel'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>random</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>egd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>builtin</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </rng>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <filesystem supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='driverType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>path</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>handle</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtiofs</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </filesystem>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <tpm supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tpm-tis</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tpm-crb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendModel'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>emulator</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>external</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendVersion'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>2.0</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </tpm>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <redirdev supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='bus'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>usb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </redirdev>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <channel supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pty</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>unix</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </channel>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <crypto supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>qemu</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendModel'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>builtin</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </crypto>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <interface supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>default</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>passt</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </interface>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <panic supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>isa</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>hyperv</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </panic>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <console supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>null</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vc</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pty</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>dev</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>file</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pipe</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>stdio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>udp</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tcp</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>unix</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>qemu-vdagent</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>dbus</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </console>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </devices>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <gic supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <vmcoreinfo supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <genid supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <backingStoreInput supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <backup supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <async-teardown supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <s390-pv supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <ps2 supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <tdx supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <sev supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <sgx supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <hyperv supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='features'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>relaxed</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vapic</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>spinlocks</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vpindex</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>runtime</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>synic</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>stimer</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>reset</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vendor_id</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>frequencies</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>reenlightenment</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tlbflush</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>ipi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>avic</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>emsr_bitmap</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>xmm_input</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <defaults>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <spinlocks>4095</spinlocks>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <stimer_direct>on</stimer_direct>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <tlbflush_direct>on</tlbflush_direct>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <tlbflush_extended>on</tlbflush_extended>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </defaults>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </hyperv>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <launchSecurity supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: </domainCapabilities>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.311 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 26 13:01:42 np0005596060 nova_compute[246386]: <domainCapabilities>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <path>/usr/libexec/qemu-kvm</path>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <domain>kvm</domain>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <arch>i686</arch>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <vcpu max='240'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <iothreads supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <os supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <enum name='firmware'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <loader supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>rom</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pflash</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='readonly'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>yes</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>no</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='secure'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>no</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </loader>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </os>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <cpu>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='host-passthrough' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='hostPassthroughMigratable'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>on</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>off</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='maximum' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='maximumMigratable'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>on</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>off</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='host-model' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <vendor>AMD</vendor>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='x2apic'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='tsc-deadline'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='hypervisor'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='tsc_adjust'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='spec-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='stibp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='cmp_legacy'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='overflow-recov'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='succor'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='amd-ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='virt-ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='lbrv'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='tsc-scale'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='vmcb-clean'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='flushbyasid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='pause-filter'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='pfthreshold'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='svme-addr-chk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='disable' name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='custom' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='ClearwaterForest'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ddpd-u'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sha512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm3'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='ClearwaterForest-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ddpd-u'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sha512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm3'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cooperlake'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cooperlake-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cooperlake-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Dhyana-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Genoa'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Genoa-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Genoa-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='perfmon-v2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Turin'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vp2intersect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibpb-brtype'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='perfmon-v2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbpb'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='srso-user-kernel-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Turin-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vp2intersect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibpb-brtype'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='perfmon-v2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbpb'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='srso-user-kernel-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-128'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-256'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-128'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-256'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v6'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v7'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='KnightsMill'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4fmaps'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4vnniw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512er'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512pf'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='KnightsMill-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4fmaps'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4vnniw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512er'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512pf'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G4-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tbm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G5-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tbm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='athlon'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='athlon-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='core2duo'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='core2duo-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='coreduo'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='coreduo-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='n270'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='n270-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='phenom'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='phenom-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </cpu>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <memoryBacking supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <enum name='sourceType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>file</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>anonymous</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>memfd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </memoryBacking>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <devices>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <disk supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='diskDevice'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>disk</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>cdrom</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>floppy</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>lun</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='bus'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>ide</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>fdc</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>scsi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>usb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>sata</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-non-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </disk>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <graphics supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vnc</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>egl-headless</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>dbus</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </graphics>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <video supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='modelType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vga</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>cirrus</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>none</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>bochs</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>ramfb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </video>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <hostdev supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='mode'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>subsystem</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='startupPolicy'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>default</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>mandatory</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>requisite</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>optional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='subsysType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>usb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pci</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>scsi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='capsType'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='pciBackend'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </hostdev>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <rng supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-non-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendModel'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>random</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>egd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>builtin</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </rng>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <filesystem supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='driverType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>path</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>handle</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtiofs</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </filesystem>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <tpm supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tpm-tis</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tpm-crb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendModel'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>emulator</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>external</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendVersion'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>2.0</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </tpm>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <redirdev supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='bus'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>usb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </redirdev>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <channel supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pty</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>unix</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </channel>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <crypto supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>qemu</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendModel'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>builtin</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </crypto>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <interface supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>default</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>passt</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </interface>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <panic supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>isa</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>hyperv</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </panic>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <console supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>null</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vc</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pty</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>dev</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>file</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pipe</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>stdio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>udp</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tcp</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>unix</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>qemu-vdagent</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>dbus</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </console>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </devices>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <gic supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <vmcoreinfo supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <genid supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <backingStoreInput supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <backup supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <async-teardown supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <s390-pv supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <ps2 supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <tdx supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <sev supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <sgx supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <hyperv supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='features'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>relaxed</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vapic</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>spinlocks</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vpindex</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>runtime</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>synic</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>stimer</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>reset</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vendor_id</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>frequencies</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>reenlightenment</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tlbflush</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>ipi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>avic</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>emsr_bitmap</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>xmm_input</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <defaults>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <spinlocks>4095</spinlocks>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <stimer_direct>on</stimer_direct>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <tlbflush_direct>on</tlbflush_direct>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <tlbflush_extended>on</tlbflush_extended>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </defaults>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </hyperv>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <launchSecurity supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: </domainCapabilities>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.364 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.369 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 26 13:01:42 np0005596060 nova_compute[246386]: <domainCapabilities>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <path>/usr/libexec/qemu-kvm</path>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <domain>kvm</domain>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <arch>x86_64</arch>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <vcpu max='4096'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <iothreads supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <os supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <enum name='firmware'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>efi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <loader supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>rom</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pflash</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='readonly'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>yes</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>no</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='secure'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>yes</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>no</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </loader>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </os>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <cpu>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='host-passthrough' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='hostPassthroughMigratable'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>on</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>off</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='maximum' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='maximumMigratable'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>on</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>off</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='host-model' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <vendor>AMD</vendor>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='x2apic'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='tsc-deadline'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='hypervisor'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='tsc_adjust'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='spec-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='stibp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='cmp_legacy'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='overflow-recov'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='succor'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='amd-ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='virt-ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='lbrv'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='tsc-scale'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='vmcb-clean'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='flushbyasid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='pause-filter'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='pfthreshold'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='svme-addr-chk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='disable' name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='custom' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='ClearwaterForest'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ddpd-u'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sha512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm3'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='ClearwaterForest-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ddpd-u'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sha512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm3'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cooperlake'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cooperlake-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cooperlake-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Dhyana-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Genoa'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Genoa-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Genoa-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='perfmon-v2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Turin'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vp2intersect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibpb-brtype'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='perfmon-v2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbpb'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='srso-user-kernel-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Turin-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vp2intersect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibpb-brtype'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='perfmon-v2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbpb'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='srso-user-kernel-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-128'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-256'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-128'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-256'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v6'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v7'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='KnightsMill'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4fmaps'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4vnniw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512er'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512pf'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='KnightsMill-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4fmaps'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4vnniw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512er'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512pf'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G4-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tbm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G5-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tbm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='athlon'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='athlon-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='core2duo'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='core2duo-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='coreduo'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='coreduo-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='n270'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='n270-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='phenom'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='phenom-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </cpu>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <memoryBacking supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <enum name='sourceType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>file</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>anonymous</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>memfd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </memoryBacking>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <devices>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <disk supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='diskDevice'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>disk</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>cdrom</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>floppy</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>lun</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='bus'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>fdc</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>scsi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>usb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>sata</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-non-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </disk>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <graphics supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vnc</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>egl-headless</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>dbus</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </graphics>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <video supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='modelType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vga</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>cirrus</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>none</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>bochs</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>ramfb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </video>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <hostdev supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='mode'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>subsystem</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='startupPolicy'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>default</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>mandatory</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>requisite</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>optional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='subsysType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>usb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pci</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>scsi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='capsType'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='pciBackend'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </hostdev>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <rng supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-non-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendModel'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>random</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>egd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>builtin</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </rng>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <filesystem supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='driverType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>path</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>handle</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtiofs</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </filesystem>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <tpm supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tpm-tis</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tpm-crb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendModel'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>emulator</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>external</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendVersion'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>2.0</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </tpm>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <redirdev supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='bus'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>usb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </redirdev>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <channel supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pty</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>unix</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </channel>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <crypto supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>qemu</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendModel'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>builtin</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </crypto>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <interface supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>default</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>passt</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </interface>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <panic supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>isa</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>hyperv</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </panic>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <console supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>null</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vc</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pty</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>dev</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>file</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pipe</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>stdio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>udp</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tcp</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>unix</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>qemu-vdagent</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>dbus</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </console>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </devices>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <gic supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <vmcoreinfo supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <genid supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <backingStoreInput supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <backup supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <async-teardown supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <s390-pv supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <ps2 supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <tdx supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <sev supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <sgx supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <hyperv supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='features'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>relaxed</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vapic</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>spinlocks</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vpindex</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>runtime</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>synic</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>stimer</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>reset</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vendor_id</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>frequencies</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>reenlightenment</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tlbflush</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>ipi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>avic</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>emsr_bitmap</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>xmm_input</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <defaults>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <spinlocks>4095</spinlocks>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <stimer_direct>on</stimer_direct>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <tlbflush_direct>on</tlbflush_direct>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <tlbflush_extended>on</tlbflush_extended>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </defaults>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </hyperv>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <launchSecurity supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: </domainCapabilities>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.461 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 26 13:01:42 np0005596060 nova_compute[246386]: <domainCapabilities>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <path>/usr/libexec/qemu-kvm</path>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <domain>kvm</domain>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <arch>x86_64</arch>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <vcpu max='240'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <iothreads supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <os supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <enum name='firmware'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <loader supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>rom</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pflash</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='readonly'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>yes</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>no</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='secure'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>no</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </loader>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </os>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <cpu>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='host-passthrough' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='hostPassthroughMigratable'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>on</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>off</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='maximum' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='maximumMigratable'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>on</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>off</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='host-model' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <vendor>AMD</vendor>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='x2apic'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='tsc-deadline'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='hypervisor'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='tsc_adjust'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='spec-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='stibp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='cmp_legacy'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='overflow-recov'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='succor'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='amd-ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='virt-ssbd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='lbrv'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='tsc-scale'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='vmcb-clean'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='flushbyasid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='pause-filter'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='pfthreshold'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='svme-addr-chk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <feature policy='disable' name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <mode name='custom' supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Broadwell-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cascadelake-Server-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='ClearwaterForest'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ddpd-u'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sha512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm3'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='ClearwaterForest-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ddpd-u'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sha512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm3'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sm4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cooperlake'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cooperlake-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Cooperlake-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Denverton-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Dhyana-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Genoa'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Genoa-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Genoa-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='perfmon-v2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Milan-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Rome-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Turin'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vp2intersect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibpb-brtype'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='perfmon-v2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbpb'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='srso-user-kernel-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-Turin-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amd-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='auto-ibrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vp2intersect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibpb-brtype'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='perfmon-v2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbpb'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='srso-user-kernel-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='stibp-always-on'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='EPYC-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-128'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-256'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='GraniteRapids-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-128'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-256'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx10-512'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='prefetchiti'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Haswell-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-noTSX'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v6'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Icelake-Server-v7'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='IvyBridge-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='KnightsMill'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4fmaps'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4vnniw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512er'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512pf'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='KnightsMill-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4fmaps'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-4vnniw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512er'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512pf'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G4-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tbm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Opteron_G5-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fma4'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tbm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xop'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SapphireRapids-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='amx-tile'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-bf16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-fp16'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bitalg'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrc'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fzrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='la57'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='taa-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='SierraForest-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ifma'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cmpccxadd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fbsdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='fsrs'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ibrs-all'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='intel-psfd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='lam'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mcdt-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pbrsb-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='psdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='serialize'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vaes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Client-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='hle'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='rtm'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Skylake-Server-v5'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512bw'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512cd'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512dq'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512f'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='avx512vl'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='invpcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pcid'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='pku'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='mpx'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v2'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v3'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='core-capability'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='split-lock-detect'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='Snowridge-v4'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='cldemote'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='erms'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='gfni'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdir64b'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='movdiri'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='xsaves'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='athlon'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='athlon-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='core2duo'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='core2duo-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='coreduo'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='coreduo-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='n270'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='n270-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='ss'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='phenom'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <blockers model='phenom-v1'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnow'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <feature name='3dnowext'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </blockers>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </mode>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </cpu>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <memoryBacking supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <enum name='sourceType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>file</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>anonymous</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <value>memfd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </memoryBacking>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <devices>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <disk supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='diskDevice'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>disk</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>cdrom</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>floppy</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>lun</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='bus'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>ide</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>fdc</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>scsi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>usb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>sata</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-non-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </disk>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <graphics supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vnc</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>egl-headless</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>dbus</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </graphics>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <video supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='modelType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vga</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>cirrus</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>none</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>bochs</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>ramfb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </video>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <hostdev supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='mode'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>subsystem</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='startupPolicy'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>default</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>mandatory</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>requisite</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>optional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='subsysType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>usb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pci</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>scsi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='capsType'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='pciBackend'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </hostdev>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <rng supported='yes'>
Jan 26 13:01:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:42.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtio-non-transitional</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendModel'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>random</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>egd</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>builtin</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </rng>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <filesystem supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='driverType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>path</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>handle</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>virtiofs</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </filesystem>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <tpm supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tpm-tis</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tpm-crb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendModel'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>emulator</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>external</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendVersion'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>2.0</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </tpm>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <redirdev supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='bus'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>usb</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </redirdev>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <channel supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pty</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>unix</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </channel>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <crypto supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>qemu</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendModel'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>builtin</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </crypto>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <interface supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='backendType'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>default</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>passt</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </interface>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <panic supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='model'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>isa</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>hyperv</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </panic>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <console supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='type'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>null</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vc</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pty</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>dev</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>file</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>pipe</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>stdio</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>udp</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tcp</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>unix</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>qemu-vdagent</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>dbus</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </console>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </devices>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <gic supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <vmcoreinfo supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <genid supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <backingStoreInput supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <backup supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <async-teardown supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <s390-pv supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <ps2 supported='yes'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <tdx supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <sev supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <sgx supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <hyperv supported='yes'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <enum name='features'>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>relaxed</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vapic</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>spinlocks</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vpindex</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>runtime</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>synic</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>stimer</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>reset</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>vendor_id</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>frequencies</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>reenlightenment</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>tlbflush</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>ipi</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>avic</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>emsr_bitmap</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <value>xmm_input</value>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </enum>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      <defaults>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <spinlocks>4095</spinlocks>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <stimer_direct>on</stimer_direct>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <tlbflush_direct>on</tlbflush_direct>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <tlbflush_extended>on</tlbflush_extended>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:      </defaults>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    </hyperv>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:    <launchSecurity supported='no'/>
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  </features>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: </domainCapabilities>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.565 246390 DEBUG nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.565 246390 INFO nova.virt.libvirt.host [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Secure Boot support detected#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.568 246390 INFO nova.virt.libvirt.driver [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.568 246390 INFO nova.virt.libvirt.driver [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.578 246390 DEBUG nova.virt.libvirt.driver [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 26 13:01:42 np0005596060 nova_compute[246386]:  <model>Nehalem</model>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: </cpu>
Jan 26 13:01:42 np0005596060 nova_compute[246386]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.581 246390 DEBUG nova.virt.libvirt.driver [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.633 246390 INFO nova.virt.node [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Determined node identity c679f5ea-e093-4909-bb04-0342c8551a8f from /var/lib/nova/compute_id#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.657 246390 WARNING nova.compute.manager [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Compute nodes ['c679f5ea-e093-4909-bb04-0342c8551a8f'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.697 246390 INFO nova.compute.manager [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.776 246390 WARNING nova.compute.manager [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.776 246390 DEBUG oslo_concurrency.lockutils [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.777 246390 DEBUG oslo_concurrency.lockutils [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.777 246390 DEBUG oslo_concurrency.lockutils [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.777 246390 DEBUG nova.compute.resource_tracker [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:01:42 np0005596060 nova_compute[246386]: 2026-01-26 18:01:42.777 246390 DEBUG oslo_concurrency.processutils [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:01:43 np0005596060 python3.9[247090]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 26 13:01:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:01:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/424374556' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:01:43 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 13:01:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:43.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:43 np0005596060 nova_compute[246386]: 2026-01-26 18:01:43.230 246390 DEBUG oslo_concurrency.processutils [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:01:43 np0005596060 systemd[1]: Starting libvirt nodedev daemon...
Jan 26 13:01:43 np0005596060 systemd[1]: Started libvirt nodedev daemon.
Jan 26 13:01:43 np0005596060 nova_compute[246386]: 2026-01-26 18:01:43.594 246390 WARNING nova.virt.libvirt.driver [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:01:43 np0005596060 nova_compute[246386]: 2026-01-26 18:01:43.597 246390 DEBUG nova.compute.resource_tracker [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:01:43 np0005596060 nova_compute[246386]: 2026-01-26 18:01:43.597 246390 DEBUG oslo_concurrency.lockutils [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:01:43 np0005596060 nova_compute[246386]: 2026-01-26 18:01:43.598 246390 DEBUG oslo_concurrency.lockutils [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:01:43 np0005596060 nova_compute[246386]: 2026-01-26 18:01:43.621 246390 WARNING nova.compute.resource_tracker [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] No compute node record for compute-0.ctlplane.example.com:c679f5ea-e093-4909-bb04-0342c8551a8f: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host c679f5ea-e093-4909-bb04-0342c8551a8f could not be found.#033[00m
Jan 26 13:01:43 np0005596060 nova_compute[246386]: 2026-01-26 18:01:43.648 246390 INFO nova.compute.resource_tracker [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: c679f5ea-e093-4909-bb04-0342c8551a8f#033[00m
Jan 26 13:01:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:43 np0005596060 nova_compute[246386]: 2026-01-26 18:01:43.755 246390 DEBUG nova.compute.resource_tracker [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:01:43 np0005596060 nova_compute[246386]: 2026-01-26 18:01:43.755 246390 DEBUG nova.compute.resource_tracker [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:01:44
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['images', 'backups', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:01:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:01:44 np0005596060 python3.9[247338]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 13:01:44 np0005596060 systemd[1]: Stopping nova_compute container...
Jan 26 13:01:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:01:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:44.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:01:44 np0005596060 nova_compute[246386]: 2026-01-26 18:01:44.870 246390 INFO nova.scheduler.client.report [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] [req-c8d6146b-627d-499e-bef5-c88e90be1b42] Created resource provider record via placement API for resource provider with UUID c679f5ea-e093-4909-bb04-0342c8551a8f and name compute-0.ctlplane.example.com.#033[00m
Jan 26 13:01:44 np0005596060 nova_compute[246386]: 2026-01-26 18:01:44.905 246390 DEBUG oslo_concurrency.processutils [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:01:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:01:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:45.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:01:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:01:45 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1505198002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:01:45 np0005596060 nova_compute[246386]: 2026-01-26 18:01:45.397 246390 DEBUG oslo_concurrency.lockutils [None req-12e3e790-8c70-499b-8f5a-90753f919712 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:01:45 np0005596060 nova_compute[246386]: 2026-01-26 18:01:45.398 246390 DEBUG oslo_concurrency.lockutils [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:01:45 np0005596060 nova_compute[246386]: 2026-01-26 18:01:45.398 246390 DEBUG oslo_concurrency.lockutils [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:01:45 np0005596060 nova_compute[246386]: 2026-01-26 18:01:45.399 246390 DEBUG oslo_concurrency.lockutils [None req-749372c2-6aa2-4ef6-b073-91356bebacdd - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:01:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:45 np0005596060 virtqemud[246749]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 26 13:01:45 np0005596060 virtqemud[246749]: hostname: compute-0
Jan 26 13:01:45 np0005596060 virtqemud[246749]: End of file while reading data: Input/output error
Jan 26 13:01:45 np0005596060 systemd[1]: libpod-7569887fe02f2c8198c9b9abc7100a68d6e8915b3199e6433af5977ed385d86f.scope: Deactivated successfully.
Jan 26 13:01:45 np0005596060 systemd[1]: libpod-7569887fe02f2c8198c9b9abc7100a68d6e8915b3199e6433af5977ed385d86f.scope: Consumed 4.491s CPU time.
Jan 26 13:01:45 np0005596060 podman[247342]: 2026-01-26 18:01:45.887577174 +0000 UTC m=+1.322610620 container died 7569887fe02f2c8198c9b9abc7100a68d6e8915b3199e6433af5977ed385d86f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 26 13:01:45 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7569887fe02f2c8198c9b9abc7100a68d6e8915b3199e6433af5977ed385d86f-userdata-shm.mount: Deactivated successfully.
Jan 26 13:01:45 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c852f78818126c03492a5f9fbce795cf9f0d30c82c6f44e03759af8663316f14-merged.mount: Deactivated successfully.
Jan 26 13:01:45 np0005596060 podman[247342]: 2026-01-26 18:01:45.982211932 +0000 UTC m=+1.417245378 container cleanup 7569887fe02f2c8198c9b9abc7100a68d6e8915b3199e6433af5977ed385d86f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Jan 26 13:01:45 np0005596060 podman[247342]: nova_compute
Jan 26 13:01:46 np0005596060 podman[247395]: nova_compute
Jan 26 13:01:46 np0005596060 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 26 13:01:46 np0005596060 systemd[1]: Stopped nova_compute container.
Jan 26 13:01:46 np0005596060 systemd[1]: Starting nova_compute container...
Jan 26 13:01:46 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:01:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c852f78818126c03492a5f9fbce795cf9f0d30c82c6f44e03759af8663316f14/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c852f78818126c03492a5f9fbce795cf9f0d30c82c6f44e03759af8663316f14/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c852f78818126c03492a5f9fbce795cf9f0d30c82c6f44e03759af8663316f14/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c852f78818126c03492a5f9fbce795cf9f0d30c82c6f44e03759af8663316f14/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c852f78818126c03492a5f9fbce795cf9f0d30c82c6f44e03759af8663316f14/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:46 np0005596060 podman[247408]: 2026-01-26 18:01:46.161769544 +0000 UTC m=+0.085521469 container init 7569887fe02f2c8198c9b9abc7100a68d6e8915b3199e6433af5977ed385d86f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:01:46 np0005596060 podman[247408]: 2026-01-26 18:01:46.171315914 +0000 UTC m=+0.095067829 container start 7569887fe02f2c8198c9b9abc7100a68d6e8915b3199e6433af5977ed385d86f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible)
Jan 26 13:01:46 np0005596060 podman[247408]: nova_compute
Jan 26 13:01:46 np0005596060 nova_compute[247421]: + sudo -E kolla_set_configs
Jan 26 13:01:46 np0005596060 systemd[1]: Started nova_compute container.
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Validating config file
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Copying service configuration files
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Deleting /etc/ceph
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Creating directory /etc/ceph
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /etc/ceph
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Writing out command to execute
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 13:01:46 np0005596060 nova_compute[247421]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 13:01:46 np0005596060 nova_compute[247421]: ++ cat /run_command
Jan 26 13:01:46 np0005596060 nova_compute[247421]: + CMD=nova-compute
Jan 26 13:01:46 np0005596060 nova_compute[247421]: + ARGS=
Jan 26 13:01:46 np0005596060 nova_compute[247421]: + sudo kolla_copy_cacerts
Jan 26 13:01:46 np0005596060 nova_compute[247421]: + [[ ! -n '' ]]
Jan 26 13:01:46 np0005596060 nova_compute[247421]: + . kolla_extend_start
Jan 26 13:01:46 np0005596060 nova_compute[247421]: + echo 'Running command: '\''nova-compute'\'''
Jan 26 13:01:46 np0005596060 nova_compute[247421]: Running command: 'nova-compute'
Jan 26 13:01:46 np0005596060 nova_compute[247421]: + umask 0022
Jan 26 13:01:46 np0005596060 nova_compute[247421]: + exec nova-compute
Jan 26 13:01:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:01:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:46.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:47.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:47 np0005596060 python3.9[247588]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 26 13:01:47 np0005596060 systemd[1]: Started libpod-conmon-a3fbc1de11280edbc92230cdb2ac506cdde0a0bfad95daae9f98bb8f03a77429.scope.
Jan 26 13:01:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:01:47 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:01:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4539d489310a5f80005aa5b707505e95bca0a3c9851c15142bc58a683594a688/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4539d489310a5f80005aa5b707505e95bca0a3c9851c15142bc58a683594a688/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4539d489310a5f80005aa5b707505e95bca0a3c9851c15142bc58a683594a688/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 26 13:01:47 np0005596060 podman[247614]: 2026-01-26 18:01:47.790443565 +0000 UTC m=+0.125179927 container init a3fbc1de11280edbc92230cdb2ac506cdde0a0bfad95daae9f98bb8f03a77429 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:01:47 np0005596060 podman[247614]: 2026-01-26 18:01:47.798051156 +0000 UTC m=+0.132787498 container start a3fbc1de11280edbc92230cdb2ac506cdde0a0bfad95daae9f98bb8f03a77429 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:01:47 np0005596060 python3.9[247588]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Applying nova statedir ownership
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 26 13:01:47 np0005596060 nova_compute_init[247635]: INFO:nova_statedir:Nova statedir ownership complete
Jan 26 13:01:47 np0005596060 systemd[1]: libpod-a3fbc1de11280edbc92230cdb2ac506cdde0a0bfad95daae9f98bb8f03a77429.scope: Deactivated successfully.
Jan 26 13:01:47 np0005596060 podman[247636]: 2026-01-26 18:01:47.864434844 +0000 UTC m=+0.034910338 container died a3fbc1de11280edbc92230cdb2ac506cdde0a0bfad95daae9f98bb8f03a77429 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=nova_compute_init, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:01:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a3fbc1de11280edbc92230cdb2ac506cdde0a0bfad95daae9f98bb8f03a77429-userdata-shm.mount: Deactivated successfully.
Jan 26 13:01:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4539d489310a5f80005aa5b707505e95bca0a3c9851c15142bc58a683594a688-merged.mount: Deactivated successfully.
Jan 26 13:01:47 np0005596060 podman[247647]: 2026-01-26 18:01:47.927902889 +0000 UTC m=+0.058532812 container cleanup a3fbc1de11280edbc92230cdb2ac506cdde0a0bfad95daae9f98bb8f03a77429 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute_init)
Jan 26 13:01:47 np0005596060 systemd[1]: libpod-conmon-a3fbc1de11280edbc92230cdb2ac506cdde0a0bfad95daae9f98bb8f03a77429.scope: Deactivated successfully.
Jan 26 13:01:48 np0005596060 nova_compute[247421]: 2026-01-26 18:01:48.448 247428 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 26 13:01:48 np0005596060 nova_compute[247421]: 2026-01-26 18:01:48.448 247428 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 26 13:01:48 np0005596060 nova_compute[247421]: 2026-01-26 18:01:48.449 247428 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 26 13:01:48 np0005596060 nova_compute[247421]: 2026-01-26 18:01:48.449 247428 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 26 13:01:48 np0005596060 nova_compute[247421]: 2026-01-26 18:01:48.604 247428 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:01:48 np0005596060 nova_compute[247421]: 2026-01-26 18:01:48.638 247428 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:01:48 np0005596060 nova_compute[247421]: 2026-01-26 18:01:48.640 247428 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 26 13:01:48 np0005596060 systemd[1]: session-50.scope: Deactivated successfully.
Jan 26 13:01:48 np0005596060 systemd[1]: session-50.scope: Consumed 2min 11.255s CPU time.
Jan 26 13:01:48 np0005596060 systemd-logind[786]: Session 50 logged out. Waiting for processes to exit.
Jan 26 13:01:48 np0005596060 systemd-logind[786]: Removed session 50.
Jan 26 13:01:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:01:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:01:48.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.177 247428 INFO nova.virt.driver [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 26 13:01:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:01:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:01:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:01:49.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.289 247428 INFO nova.compute.provider_config [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.308 247428 DEBUG oslo_concurrency.lockutils [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.308 247428 DEBUG oslo_concurrency.lockutils [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.309 247428 DEBUG oslo_concurrency.lockutils [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.309 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.309 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.309 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.310 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.310 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.310 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.310 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.310 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.310 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.310 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.311 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.311 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.311 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.311 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.311 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.311 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.311 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.311 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.312 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.312 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.312 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.312 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.312 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.312 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.312 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.313 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.313 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.313 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.313 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.313 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.313 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.313 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.314 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.314 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.314 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.314 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.314 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.314 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.314 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.315 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.315 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.315 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.315 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.315 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.315 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.315 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.316 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.316 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.316 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.316 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.316 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.316 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.316 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.317 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.317 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.317 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.317 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.317 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.317 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.317 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.318 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.318 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.318 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.318 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.318 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.318 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.318 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.318 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.319 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.319 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.319 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.319 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.319 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.319 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.319 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.320 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.320 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.320 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.320 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.320 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.320 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.320 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.321 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.321 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.321 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.321 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.321 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.321 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.321 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.322 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.322 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.322 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.322 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.322 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.322 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.322 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.323 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.323 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.323 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.323 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.323 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.323 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.323 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.324 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.324 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.324 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.324 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.324 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.324 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.324 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.325 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.325 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.325 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.325 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.325 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.325 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.325 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.326 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.326 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.326 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.326 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.326 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.326 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.326 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.327 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.327 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.327 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.327 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.327 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.327 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.327 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.328 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.328 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.328 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.328 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.328 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.328 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.328 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.329 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.329 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.329 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.329 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.329 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.329 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.330 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.330 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.330 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.330 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.330 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.330 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.331 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.331 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.331 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.331 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.331 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.331 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.331 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.332 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.332 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.332 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.332 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.332 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.332 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.333 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.333 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.333 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.333 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.333 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.333 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.333 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.334 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.334 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.334 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.334 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.334 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.334 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.334 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.335 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.335 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.335 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.335 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.335 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.335 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.335 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.336 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.336 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.336 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.336 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.336 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.337 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.337 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.337 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.337 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.337 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.337 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.337 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.338 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.338 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.338 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.338 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.338 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.338 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.338 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.339 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.339 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.339 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.339 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.339 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.339 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.339 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.339 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.340 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.340 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.340 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.340 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.340 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.340 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.340 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.341 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.341 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.341 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.341 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.341 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.341 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.341 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.342 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.342 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.342 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.342 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.342 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.342 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.342 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.343 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.343 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.343 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.343 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.343 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.343 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.343 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.344 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.344 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.344 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.344 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.344 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.344 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.344 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.345 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.345 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.345 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.345 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.345 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.345 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.345 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.345 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.346 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.346 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.346 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.346 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.346 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.346 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.346 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.347 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.347 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.347 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.347 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.347 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.347 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.347 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.348 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.348 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.348 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.348 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.348 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.348 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.348 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.349 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.349 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.349 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.349 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.349 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.349 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.349 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.350 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.350 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.350 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.350 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.350 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.350 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.350 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.351 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.351 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.351 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.351 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.351 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.351 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.351 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.352 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.352 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.352 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.352 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.352 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.352 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.352 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.353 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.353 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.353 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.353 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.353 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.353 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.354 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.354 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.354 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.354 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.354 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.354 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.354 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.355 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.355 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.355 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.355 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.355 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.355 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.355 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.356 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.356 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.356 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.356 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.356 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.356 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.356 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.357 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.357 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.357 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.357 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.357 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.357 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.357 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.358 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.358 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.358 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.358 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.358 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.359 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.359 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.359 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.359 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.359 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.359 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.359 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.360 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.360 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.360 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.360 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.360 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.360 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.361 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.361 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.361 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.361 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.361 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.361 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.362 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.362 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.362 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.362 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.362 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.362 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.363 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.363 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.363 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.363 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.363 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.363 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.363 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.364 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.364 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.364 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.364 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.364 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.364 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.364 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.365 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.365 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.365 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.365 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.365 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.365 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.365 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.366 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.366 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.366 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.366 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.366 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.366 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.366 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.367 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.367 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.367 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.367 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.367 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.367 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.367 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.368 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.368 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.368 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.368 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.368 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.368 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.368 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.369 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.369 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.369 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.369 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.369 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.369 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.369 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.370 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.370 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.370 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.370 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.370 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.370 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.371 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.371 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.371 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.371 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.371 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.371 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.371 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.372 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.372 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.372 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.372 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.372 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.372 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.372 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.372 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.373 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.373 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.373 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.373 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.373 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.373 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.373 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.374 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.374 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.374 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.374 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.374 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.374 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.374 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.375 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.375 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.375 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.375 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.375 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.375 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.376 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.376 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.376 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.376 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.376 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.376 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.376 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.377 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.377 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.377 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.377 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.377 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.377 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.377 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.378 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.378 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.378 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.378 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.378 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.378 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.379 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.379 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.379 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.379 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.379 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.379 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.379 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.380 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.380 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.380 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.380 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.380 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.380 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.380 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.381 247428 WARNING oslo_config.cfg [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 26 13:01:49 np0005596060 nova_compute[247421]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 26 13:01:49 np0005596060 nova_compute[247421]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 26 13:01:49 np0005596060 nova_compute[247421]: and ``live_migration_inbound_addr`` respectively.
Jan 26 13:01:49 np0005596060 nova_compute[247421]: ).  Its value may be silently ignored in the future.#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.381 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.381 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.381 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.381 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.381 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.382 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.382 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.382 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.382 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.382 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.382 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.383 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.383 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.383 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.383 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.383 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.383 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.383 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.384 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.rbd_secret_uuid        = d4cd1917-5876-51b6-bc64-65a16199754d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.384 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.384 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.384 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.384 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.384 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.385 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.385 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.385 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.385 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.385 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.385 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.385 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.386 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.386 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.386 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.386 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.386 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.386 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.387 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.387 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.387 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.387 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.387 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.387 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.388 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.388 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.388 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.388 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.388 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.388 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.389 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.389 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.389 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.389 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.389 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.389 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.389 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.390 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.390 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.390 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.390 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.390 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.390 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.390 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.391 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.391 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.391 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.391 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.391 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.392 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.392 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.392 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.392 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.392 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.392 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.392 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.393 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.393 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.393 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.393 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.393 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.393 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.393 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.394 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.394 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.394 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.394 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.394 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.394 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.395 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.395 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.395 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.395 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.395 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.395 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.395 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.396 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.396 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.396 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.396 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.396 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.396 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.396 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.397 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.397 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.397 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.397 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.397 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.397 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.397 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.398 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.398 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.398 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.398 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.398 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.398 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.398 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.398 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.399 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.399 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.399 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.399 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.399 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.399 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.399 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.400 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.400 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.400 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.400 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.400 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.400 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.400 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.401 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.401 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.401 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.401 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.401 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.401 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.402 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.402 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.402 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.402 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.402 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.402 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.403 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.403 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.403 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.403 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.403 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.403 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.403 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.404 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.404 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.404 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.404 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.404 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.404 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.404 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.405 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.405 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.405 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.405 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.405 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.405 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.406 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.406 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.406 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.406 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.406 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.406 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.406 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.406 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.407 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.407 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.407 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.407 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.407 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.407 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.407 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.408 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.408 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.408 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.408 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.408 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.408 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.409 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.409 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.409 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.409 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.409 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.409 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.409 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.410 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.410 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.410 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.410 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.410 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.410 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.411 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.411 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.411 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.411 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.411 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.411 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.411 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.412 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.412 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.412 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.412 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.412 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.412 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.412 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.413 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.413 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.413 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.413 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.413 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.413 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.414 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.414 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.414 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.414 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.414 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.414 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.415 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.415 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.415 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.415 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.415 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.415 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.416 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.416 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.416 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.416 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.416 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.417 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.417 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.417 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.417 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.417 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.418 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.418 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.418 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.418 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.418 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.418 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.419 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.419 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.419 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.419 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.420 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.420 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.420 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.420 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.421 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.421 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.421 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.421 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.421 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.422 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.422 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.422 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.422 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.422 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.423 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.423 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.423 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.423 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.423 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.424 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.424 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.424 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.424 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.424 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.424 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.425 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.425 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.425 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.425 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.425 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.425 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.426 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.426 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.426 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.426 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.426 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.426 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.427 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.427 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.427 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.427 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.427 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.427 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.428 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.428 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.428 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.428 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.428 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.429 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.429 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.429 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.429 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.429 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.430 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.430 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.430 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.430 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.430 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.430 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.431 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.431 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.431 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.431 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.431 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.431 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.432 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.432 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.432 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.432 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.432 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.432 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.433 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.433 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.433 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.433 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.433 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.433 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.434 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.434 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.434 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.434 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.434 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.434 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.434 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.435 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.435 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.435 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.435 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.435 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.435 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.435 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.436 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.436 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.436 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.436 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.436 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.436 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.436 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.437 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.437 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.437 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.437 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.437 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.437 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.437 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.438 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.438 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.438 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.438 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.438 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.438 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.438 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.438 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.439 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.439 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.439 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.439 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.439 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.439 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.439 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.440 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.440 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.440 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.440 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.440 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.440 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.440 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.441 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.441 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.441 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.441 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.441 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.441 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.441 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.442 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.442 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.442 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.442 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.442 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.442 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.442 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.442 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.443 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.443 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.443 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.443 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.443 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.443 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.443 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.444 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.444 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.444 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.444 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.444 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.444 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.444 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.445 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.445 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.445 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.445 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.445 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.445 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.445 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.446 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.446 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.446 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.446 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.446 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.446 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.446 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.446 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.447 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.447 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.447 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.447 247428 DEBUG oslo_service.service [None req-ce6072d8-5d37-4733-963c-4ddf0eec2139 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.448 247428 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.461 247428 INFO nova.virt.node [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Determined node identity c679f5ea-e093-4909-bb04-0342c8551a8f from /var/lib/nova/compute_id#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.462 247428 DEBUG nova.virt.libvirt.host [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.462 247428 DEBUG nova.virt.libvirt.host [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.463 247428 DEBUG nova.virt.libvirt.host [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.463 247428 DEBUG nova.virt.libvirt.host [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.480 247428 DEBUG nova.virt.libvirt.host [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f96e7f98340> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.482 247428 DEBUG nova.virt.libvirt.host [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f96e7f98340> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.483 247428 INFO nova.virt.libvirt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.490 247428 INFO nova.virt.libvirt.host [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Libvirt host capabilities <capabilities>
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <host>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <uuid>d27b7a41-30de-40e4-9f10-b4e4f5902919</uuid>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <cpu>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <arch>x86_64</arch>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model>EPYC-Rome-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <vendor>AMD</vendor>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <microcode version='16777317'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <signature family='23' model='49' stepping='0'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='x2apic'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='tsc-deadline'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='osxsave'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='hypervisor'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='tsc_adjust'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='spec-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='stibp'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='arch-capabilities'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='ssbd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='cmp_legacy'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='topoext'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='virt-ssbd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='lbrv'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='tsc-scale'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='vmcb-clean'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='pause-filter'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='pfthreshold'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='svme-addr-chk'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='rdctl-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='skip-l1dfl-vmentry'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='mds-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature name='pschange-mc-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <pages unit='KiB' size='4'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <pages unit='KiB' size='2048'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <pages unit='KiB' size='1048576'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </cpu>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <power_management>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <suspend_mem/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </power_management>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <iommu support='no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <migration_features>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <live/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <uri_transports>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <uri_transport>tcp</uri_transport>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <uri_transport>rdma</uri_transport>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </uri_transports>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </migration_features>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <topology>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <cells num='1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <cell id='0'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:          <memory unit='KiB'>7864316</memory>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:          <pages unit='KiB' size='4'>1966079</pages>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:          <pages unit='KiB' size='2048'>0</pages>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:          <distances>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:            <sibling id='0' value='10'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:          </distances>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:          <cpus num='8'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:          </cpus>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        </cell>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </cells>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </topology>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <cache>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </cache>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <secmodel>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model>selinux</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <doi>0</doi>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </secmodel>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <secmodel>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model>dac</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <doi>0</doi>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </secmodel>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  </host>
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <guest>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <os_type>hvm</os_type>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <arch name='i686'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <wordsize>32</wordsize>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <domain type='qemu'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <domain type='kvm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </arch>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <features>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <pae/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <nonpae/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <acpi default='on' toggle='yes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <apic default='on' toggle='no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <cpuselection/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <deviceboot/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <disksnapshot default='on' toggle='no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <externalSnapshot/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </features>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  </guest>
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <guest>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <os_type>hvm</os_type>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <arch name='x86_64'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <wordsize>64</wordsize>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <domain type='qemu'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <domain type='kvm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </arch>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <features>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <acpi default='on' toggle='yes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <apic default='on' toggle='no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <cpuselection/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <deviceboot/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <disksnapshot default='on' toggle='no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <externalSnapshot/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </features>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  </guest>
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 
Jan 26 13:01:49 np0005596060 nova_compute[247421]: </capabilities>
Jan 26 13:01:49 np0005596060 nova_compute[247421]: #033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.495 247428 DEBUG nova.virt.libvirt.host [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.501 247428 DEBUG nova.virt.libvirt.host [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 26 13:01:49 np0005596060 nova_compute[247421]: <domainCapabilities>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <path>/usr/libexec/qemu-kvm</path>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <domain>kvm</domain>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <arch>i686</arch>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <vcpu max='240'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <iothreads supported='yes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <os supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <enum name='firmware'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <loader supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='type'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>rom</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>pflash</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='readonly'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>yes</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>no</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='secure'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>no</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </loader>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <cpu>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <mode name='host-passthrough' supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='hostPassthroughMigratable'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>on</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>off</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </mode>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <mode name='maximum' supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='maximumMigratable'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>on</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>off</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </mode>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <mode name='host-model' supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <vendor>AMD</vendor>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='x2apic'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='tsc-deadline'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='hypervisor'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='tsc_adjust'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='spec-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='stibp'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='ssbd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='cmp_legacy'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='overflow-recov'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='succor'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='ibrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='amd-ssbd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='virt-ssbd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='lbrv'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='tsc-scale'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='vmcb-clean'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='flushbyasid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='pause-filter'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='pfthreshold'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='svme-addr-chk'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='disable' name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </mode>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <mode name='custom' supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-IBRS'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-noTSX'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-v4'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server-v4'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server-v5'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='ClearwaterForest'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni-int16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bhi-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cmpccxadd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ddpd-u'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='intel-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='lam'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mcdt-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pbrsb-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='prefetchiti'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sha512'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sm3'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sm4'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='ClearwaterForest-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni-int16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bhi-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cmpccxadd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ddpd-u'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='intel-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='lam'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mcdt-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pbrsb-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='prefetchiti'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sha512'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sm3'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sm4'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cooperlake'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cooperlake-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cooperlake-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Denverton'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mpx'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Denverton-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mpx'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Denverton-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Denverton-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Dhyana-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Genoa'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amd-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='auto-ibrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='stibp-always-on'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Genoa-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amd-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='auto-ibrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='stibp-always-on'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Genoa-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amd-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='auto-ibrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='perfmon-v2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='stibp-always-on'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Milan'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Milan-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Milan-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amd-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='stibp-always-on'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Milan-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amd-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='stibp-always-on'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Rome'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Rome-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Rome-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Rome-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Turin'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amd-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='auto-ibrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vp2intersect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibpb-brtype'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='perfmon-v2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='prefetchi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbpb'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='srso-user-kernel-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='stibp-always-on'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-Turin-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amd-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='auto-ibrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vp2intersect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fs-gs-base-ns'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibpb-brtype'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='no-nested-data-bp'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='null-sel-clr-base'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='perfmon-v2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='prefetchi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbpb'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='srso-user-kernel-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='stibp-always-on'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-v4'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='EPYC-v5'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='GraniteRapids'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-tile'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrc'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fzrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mcdt-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pbrsb-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='prefetchiti'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='GraniteRapids-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-tile'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrc'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fzrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mcdt-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pbrsb-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='prefetchiti'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='GraniteRapids-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-tile'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx10'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx10-128'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx10-256'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx10-512'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrc'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fzrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mcdt-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pbrsb-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='prefetchiti'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='GraniteRapids-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-tile'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx10'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx10-128'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx10-256'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx10-512'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrc'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fzrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mcdt-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pbrsb-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='prefetchiti'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Haswell'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Haswell-IBRS'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Haswell-noTSX'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Haswell-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Haswell-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Haswell-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Haswell-v4'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Icelake-Server'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Icelake-Server-noTSX'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Icelake-Server-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Icelake-Server-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Icelake-Server-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Icelake-Server-v4'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Icelake-Server-v5'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Icelake-Server-v6'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Icelake-Server-v7'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='IvyBridge'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='IvyBridge-IBRS'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='IvyBridge-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='IvyBridge-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='KnightsMill'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-4fmaps'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-4vnniw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512er'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512pf'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='KnightsMill-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-4fmaps'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-4vnniw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512er'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512pf'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Opteron_G4'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fma4'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xop'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Opteron_G4-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fma4'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xop'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Opteron_G5'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fma4'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='tbm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xop'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Opteron_G5-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fma4'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='tbm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xop'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='SapphireRapids'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-tile'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrc'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fzrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='SapphireRapids-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-tile'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrc'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fzrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='SapphireRapids-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-tile'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrc'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fzrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='SapphireRapids-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-tile'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrc'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fzrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='SapphireRapids-v4'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='amx-tile'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-fp16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-vpopcntdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bitalg'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vbmi2'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrc'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fzrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='la57'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='tsx-ldtrk'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='SierraForest'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cmpccxadd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mcdt-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pbrsb-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='SierraForest-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cmpccxadd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mcdt-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pbrsb-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='SierraForest-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cmpccxadd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='intel-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='lam'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mcdt-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pbrsb-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='SierraForest-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cmpccxadd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='intel-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='lam'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mcdt-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pbrsb-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Client'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Client-IBRS'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Client-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Client-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Client-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Client-v4'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Server'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Server-IBRS'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Server-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Server-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Server-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Server-v4'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Skylake-Server-v5'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Snowridge'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='core-capability'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mpx'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='split-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Snowridge-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='core-capability'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mpx'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='split-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Snowridge-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='core-capability'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='split-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Snowridge-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='core-capability'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='split-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Snowridge-v4'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='athlon'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='3dnow'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='3dnowext'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='athlon-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='3dnow'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='3dnowext'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='core2duo'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='core2duo-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='coreduo'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='coreduo-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='n270'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='n270-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='phenom'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='3dnow'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='3dnowext'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='phenom-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='3dnow'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='3dnowext'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </mode>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <memoryBacking supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <enum name='sourceType'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <value>file</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <value>anonymous</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <value>memfd</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  </memoryBacking>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <disk supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='diskDevice'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>disk</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>cdrom</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>floppy</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>lun</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='bus'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>ide</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>fdc</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>scsi</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>virtio</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>usb</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>sata</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='model'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>virtio</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>virtio-transitional</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>virtio-non-transitional</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <graphics supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='type'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>vnc</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>egl-headless</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>dbus</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </graphics>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <video supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='modelType'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>vga</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>cirrus</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>virtio</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>none</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>bochs</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>ramfb</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <hostdev supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='mode'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>subsystem</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='startupPolicy'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>default</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>mandatory</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>requisite</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>optional</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='subsysType'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>usb</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>pci</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>scsi</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='capsType'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='pciBackend'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </hostdev>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <rng supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='model'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>virtio</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>virtio-transitional</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>virtio-non-transitional</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='backendModel'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>random</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>egd</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>builtin</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <filesystem supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='driverType'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>path</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>handle</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>virtiofs</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </filesystem>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <tpm supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='model'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>tpm-tis</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>tpm-crb</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='backendModel'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>emulator</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>external</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='backendVersion'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>2.0</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </tpm>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <redirdev supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='bus'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>usb</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </redirdev>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <channel supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='type'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>pty</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>unix</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </channel>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <crypto supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='model'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='type'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>qemu</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='backendModel'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>builtin</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </crypto>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <interface supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='backendType'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>default</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>passt</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <panic supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='model'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>isa</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>hyperv</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </panic>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <console supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='type'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>null</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>vc</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>pty</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>dev</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>file</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>pipe</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>stdio</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>udp</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>tcp</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>unix</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>qemu-vdagent</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>dbus</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </console>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <gic supported='no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <vmcoreinfo supported='yes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <genid supported='yes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <backingStoreInput supported='yes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <backup supported='yes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <async-teardown supported='yes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <s390-pv supported='no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <ps2 supported='yes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <tdx supported='no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <sev supported='no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <sgx supported='no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <hyperv supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='features'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>relaxed</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>vapic</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>spinlocks</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>vpindex</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>runtime</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>synic</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>stimer</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>reset</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>vendor_id</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>frequencies</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>reenlightenment</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>tlbflush</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>ipi</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>avic</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>emsr_bitmap</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>xmm_input</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <defaults>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <spinlocks>4095</spinlocks>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <stimer_direct>on</stimer_direct>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <tlbflush_direct>on</tlbflush_direct>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <tlbflush_extended>on</tlbflush_extended>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </defaults>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </hyperv>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <launchSecurity supported='no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:01:49 np0005596060 nova_compute[247421]: </domainCapabilities>
Jan 26 13:01:49 np0005596060 nova_compute[247421]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.506 247428 DEBUG nova.virt.libvirt.volume.mount [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 26 13:01:49 np0005596060 nova_compute[247421]: 2026-01-26 18:01:49.510 247428 DEBUG nova.virt.libvirt.host [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 26 13:01:49 np0005596060 nova_compute[247421]: <domainCapabilities>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <path>/usr/libexec/qemu-kvm</path>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <domain>kvm</domain>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <arch>i686</arch>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <vcpu max='4096'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <iothreads supported='yes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <os supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <enum name='firmware'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <loader supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='type'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>rom</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>pflash</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='readonly'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>yes</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>no</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='secure'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>no</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </loader>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:  <cpu>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <mode name='host-passthrough' supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='hostPassthroughMigratable'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>on</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>off</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </mode>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <mode name='maximum' supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <enum name='maximumMigratable'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>on</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <value>off</value>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </enum>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </mode>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <mode name='host-model' supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <vendor>AMD</vendor>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='x2apic'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='tsc-deadline'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='hypervisor'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='tsc_adjust'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='spec-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='stibp'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='ssbd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='cmp_legacy'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='overflow-recov'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='succor'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='ibrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='amd-ssbd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='virt-ssbd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='lbrv'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='tsc-scale'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='vmcb-clean'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='flushbyasid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='pause-filter'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='pfthreshold'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='svme-addr-chk'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <feature policy='disable' name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    </mode>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:    <mode name='custom' supported='yes'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-IBRS'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-noTSX'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Broadwell-v4'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server-v2'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server-v3'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server-v4'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cascadelake-Server-v5'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='ClearwaterForest'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni-int16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bhi-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cmpccxadd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ddpd-u'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='intel-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='lam'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mcdt-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pbrsb-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='prefetchiti'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sha512'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sm3'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sm4'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='ClearwaterForest-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ifma'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-ne-convert'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni-int16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx-vnni-int8'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bhi-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bhi-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='bus-lock-detect'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cldemote'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='cmpccxadd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ddpd-u'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fbsdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='fsrs'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='gfni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='intel-psfd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ipred-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='lam'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='mcdt-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdir64b'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='movdiri'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pbrsb-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='prefetchiti'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='psdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rrsba-ctrl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sbdr-ssdp-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='serialize'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sha512'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sm3'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='sm4'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ss'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vaes'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='vpclmulqdq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='xsaves'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cooperlake'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='rtm'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='taa-no'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      </blockers>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:      <blockers model='Cooperlake-v1'>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512-bf16'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512bw'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512cd'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512dq'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512f'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vl'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='avx512vnni'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='erms'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='hle'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='ibrs-all'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='invpcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pcid'/>
Jan 26 13:01:49 np0005596060 nova_compute[247421]:        <feature name='pku'/>
Jan 26 13:02:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:34.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:35 np0005596060 rsyslogd[1005]: imjournal: 6240 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 26 13:02:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:02:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:35.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:02:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:02:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:02:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:36.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:02:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:37.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:02:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:02:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:38.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:39.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:02:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:40.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:41.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:02:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:42.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:02:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:02:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:43.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:02:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:02:44
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', 'default.rgw.control', '.mgr', 'backups', 'volumes', 'images', '.rgw.root', 'default.rgw.meta']
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:02:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:02:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:44.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:45.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:02:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:46.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:47.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:02:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.653 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.654 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.655 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.655 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.689 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.689 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.689 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.690 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.690 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.690 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.690 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.690 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.691 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.723 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.723 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.724 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.724 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:02:48 np0005596060 nova_compute[247421]: 2026-01-26 18:02:48.725 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:02:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:48.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:49.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:02:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2026448749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:02:49 np0005596060 nova_compute[247421]: 2026-01-26 18:02:49.350 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:02:49 np0005596060 nova_compute[247421]: 2026-01-26 18:02:49.593 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:02:49 np0005596060 nova_compute[247421]: 2026-01-26 18:02:49.595 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5238MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:02:49 np0005596060 nova_compute[247421]: 2026-01-26 18:02:49.595 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:02:49 np0005596060 nova_compute[247421]: 2026-01-26 18:02:49.595 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:02:49 np0005596060 nova_compute[247421]: 2026-01-26 18:02:49.751 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:02:49 np0005596060 nova_compute[247421]: 2026-01-26 18:02:49.752 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:02:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:02:49 np0005596060 nova_compute[247421]: 2026-01-26 18:02:49.882 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:02:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:02:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062975754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:02:50 np0005596060 nova_compute[247421]: 2026-01-26 18:02:50.460 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:02:50 np0005596060 nova_compute[247421]: 2026-01-26 18:02:50.466 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:02:50 np0005596060 nova_compute[247421]: 2026-01-26 18:02:50.499 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:02:50 np0005596060 nova_compute[247421]: 2026-01-26 18:02:50.500 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:02:50 np0005596060 nova_compute[247421]: 2026-01-26 18:02:50.501 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:02:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:50.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:02:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:51.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:02:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:02:52 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 26 13:02:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:02:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:52.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:02:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:02:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:53.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 26 13:02:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:02:54 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 26 13:02:54 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 26 13:02:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:54.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:55 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 26 13:02:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:55.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:02:55 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 26 13:02:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:56.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:02:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:57.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:02:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 30 op/s
Jan 26 13:02:57 np0005596060 podman[249062]: 2026-01-26 18:02:57.797939084 +0000 UTC m=+0.059145152 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 26 13:02:57 np0005596060 podman[249063]: 2026-01-26 18:02:57.834454073 +0000 UTC m=+0.094315288 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 13:02:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:02:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:02:58.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:02:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:02:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:02:59.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:02:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 0 B/s wr, 30 op/s
Jan 26 13:03:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:00.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:01.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 0 B/s wr, 33 op/s
Jan 26 13:03:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:02.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:03:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:03.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:03:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Jan 26 13:03:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:04.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:05.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Jan 26 13:03:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:06.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:07.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 84 KiB/s rd, 0 B/s wr, 139 op/s
Jan 26 13:03:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:03:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:08.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:03:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 26 13:03:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 13:03:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 26 13:03:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 13:03:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:03:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:03:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:09.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 0 B/s wr, 108 op/s
Jan 26 13:03:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:03:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:10.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:03:10 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 60c5e256-be75-40dc-bc00-0aee7e6aa804 does not exist
Jan 26 13:03:10 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 25aafc9b-a7f3-4173-a559-5e68b64be954 does not exist
Jan 26 13:03:10 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ab57d91b-b069-4c59-aae3-6d6f298d3e47 does not exist
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:03:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:03:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:03:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:03:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:03:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:11.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:11 np0005596060 podman[249432]: 2026-01-26 18:03:11.506959174 +0000 UTC m=+0.044303223 container create 4cc28dce3654594a3dff5987768f356ea4a0259ef06f573b81d7a564af9c0ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:03:11 np0005596060 systemd[1]: Started libpod-conmon-4cc28dce3654594a3dff5987768f356ea4a0259ef06f573b81d7a564af9c0ed3.scope.
Jan 26 13:03:11 np0005596060 podman[249432]: 2026-01-26 18:03:11.486661749 +0000 UTC m=+0.024005818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:03:11 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:03:11 np0005596060 podman[249432]: 2026-01-26 18:03:11.606554612 +0000 UTC m=+0.143898681 container init 4cc28dce3654594a3dff5987768f356ea4a0259ef06f573b81d7a564af9c0ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Jan 26 13:03:11 np0005596060 podman[249432]: 2026-01-26 18:03:11.61452586 +0000 UTC m=+0.151869909 container start 4cc28dce3654594a3dff5987768f356ea4a0259ef06f573b81d7a564af9c0ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 13:03:11 np0005596060 podman[249432]: 2026-01-26 18:03:11.619463033 +0000 UTC m=+0.156807082 container attach 4cc28dce3654594a3dff5987768f356ea4a0259ef06f573b81d7a564af9c0ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 26 13:03:11 np0005596060 systemd[1]: libpod-4cc28dce3654594a3dff5987768f356ea4a0259ef06f573b81d7a564af9c0ed3.scope: Deactivated successfully.
Jan 26 13:03:11 np0005596060 boring_perlman[249449]: 167 167
Jan 26 13:03:11 np0005596060 podman[249432]: 2026-01-26 18:03:11.625109463 +0000 UTC m=+0.162453542 container died 4cc28dce3654594a3dff5987768f356ea4a0259ef06f573b81d7a564af9c0ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 13:03:11 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c43b4526ff3e7fde31e4d4ad092f11914c63afd2316215afbc50a87751ab2519-merged.mount: Deactivated successfully.
Jan 26 13:03:11 np0005596060 podman[249432]: 2026-01-26 18:03:11.671523858 +0000 UTC m=+0.208867907 container remove 4cc28dce3654594a3dff5987768f356ea4a0259ef06f573b81d7a564af9c0ed3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_perlman, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:03:11 np0005596060 systemd[1]: libpod-conmon-4cc28dce3654594a3dff5987768f356ea4a0259ef06f573b81d7a564af9c0ed3.scope: Deactivated successfully.
Jan 26 13:03:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 0 B/s wr, 116 op/s
Jan 26 13:03:11 np0005596060 podman[249475]: 2026-01-26 18:03:11.822187016 +0000 UTC m=+0.027247639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:03:12 np0005596060 podman[249475]: 2026-01-26 18:03:12.332533281 +0000 UTC m=+0.537593914 container create 1b75036bca917a0bd379e030e0b8ebbb423f69bae7a50d07d43e63f950d0743b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 13:03:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:12.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:03:13 np0005596060 systemd[1]: Started libpod-conmon-1b75036bca917a0bd379e030e0b8ebbb423f69bae7a50d07d43e63f950d0743b.scope.
Jan 26 13:03:13 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:03:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96545108723a50ac2b24e562b6d241bea3c85bce5464b7ff50bcc54a6dc79572/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96545108723a50ac2b24e562b6d241bea3c85bce5464b7ff50bcc54a6dc79572/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96545108723a50ac2b24e562b6d241bea3c85bce5464b7ff50bcc54a6dc79572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96545108723a50ac2b24e562b6d241bea3c85bce5464b7ff50bcc54a6dc79572/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96545108723a50ac2b24e562b6d241bea3c85bce5464b7ff50bcc54a6dc79572/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:13.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:13 np0005596060 podman[249475]: 2026-01-26 18:03:13.527111322 +0000 UTC m=+1.732171935 container init 1b75036bca917a0bd379e030e0b8ebbb423f69bae7a50d07d43e63f950d0743b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 13:03:13 np0005596060 podman[249475]: 2026-01-26 18:03:13.542590857 +0000 UTC m=+1.747651500 container start 1b75036bca917a0bd379e030e0b8ebbb423f69bae7a50d07d43e63f950d0743b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ritchie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 26 13:03:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 0 B/s wr, 113 op/s
Jan 26 13:03:13 np0005596060 podman[249475]: 2026-01-26 18:03:13.935764838 +0000 UTC m=+2.140825471 container attach 1b75036bca917a0bd379e030e0b8ebbb423f69bae7a50d07d43e63f950d0743b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ritchie, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:03:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:03:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:03:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:03:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:03:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:03:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:03:14 np0005596060 quizzical_ritchie[249491]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:03:14 np0005596060 quizzical_ritchie[249491]: --> relative data size: 1.0
Jan 26 13:03:14 np0005596060 quizzical_ritchie[249491]: --> All data devices are unavailable
Jan 26 13:03:14 np0005596060 systemd[1]: libpod-1b75036bca917a0bd379e030e0b8ebbb423f69bae7a50d07d43e63f950d0743b.scope: Deactivated successfully.
Jan 26 13:03:14 np0005596060 podman[249475]: 2026-01-26 18:03:14.36955123 +0000 UTC m=+2.574611863 container died 1b75036bca917a0bd379e030e0b8ebbb423f69bae7a50d07d43e63f950d0743b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 26 13:03:14 np0005596060 systemd[1]: var-lib-containers-storage-overlay-96545108723a50ac2b24e562b6d241bea3c85bce5464b7ff50bcc54a6dc79572-merged.mount: Deactivated successfully.
Jan 26 13:03:14 np0005596060 podman[249475]: 2026-01-26 18:03:14.569458844 +0000 UTC m=+2.774519427 container remove 1b75036bca917a0bd379e030e0b8ebbb423f69bae7a50d07d43e63f950d0743b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 13:03:14 np0005596060 systemd[1]: libpod-conmon-1b75036bca917a0bd379e030e0b8ebbb423f69bae7a50d07d43e63f950d0743b.scope: Deactivated successfully.
Jan 26 13:03:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:03:14.734 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:03:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:03:14.735 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:03:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:03:14.735 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:03:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:14.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:15 np0005596060 podman[249661]: 2026-01-26 18:03:15.254488386 +0000 UTC m=+0.063954042 container create 61da4ef196cb87b5fbb547395ed129c423d174ccdaaef16d723b4ed3e91af8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 13:03:15 np0005596060 systemd[1]: Started libpod-conmon-61da4ef196cb87b5fbb547395ed129c423d174ccdaaef16d723b4ed3e91af8aa.scope.
Jan 26 13:03:15 np0005596060 podman[249661]: 2026-01-26 18:03:15.221737871 +0000 UTC m=+0.031203517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:03:15 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:03:15 np0005596060 podman[249661]: 2026-01-26 18:03:15.334829615 +0000 UTC m=+0.144295251 container init 61da4ef196cb87b5fbb547395ed129c423d174ccdaaef16d723b4ed3e91af8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_northcutt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:03:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:15.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:15 np0005596060 podman[249661]: 2026-01-26 18:03:15.342186348 +0000 UTC m=+0.151651954 container start 61da4ef196cb87b5fbb547395ed129c423d174ccdaaef16d723b4ed3e91af8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_northcutt, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:03:15 np0005596060 podman[249661]: 2026-01-26 18:03:15.346716501 +0000 UTC m=+0.156182127 container attach 61da4ef196cb87b5fbb547395ed129c423d174ccdaaef16d723b4ed3e91af8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:03:15 np0005596060 nifty_northcutt[249677]: 167 167
Jan 26 13:03:15 np0005596060 systemd[1]: libpod-61da4ef196cb87b5fbb547395ed129c423d174ccdaaef16d723b4ed3e91af8aa.scope: Deactivated successfully.
Jan 26 13:03:15 np0005596060 podman[249661]: 2026-01-26 18:03:15.349007358 +0000 UTC m=+0.158472984 container died 61da4ef196cb87b5fbb547395ed129c423d174ccdaaef16d723b4ed3e91af8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:03:15 np0005596060 systemd[1]: var-lib-containers-storage-overlay-378a858d450be9b35bad53901aec392d3ab5b47a794ecfd51594f2b3204ba92e-merged.mount: Deactivated successfully.
Jan 26 13:03:15 np0005596060 podman[249661]: 2026-01-26 18:03:15.401958055 +0000 UTC m=+0.211423671 container remove 61da4ef196cb87b5fbb547395ed129c423d174ccdaaef16d723b4ed3e91af8aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:03:15 np0005596060 systemd[1]: libpod-conmon-61da4ef196cb87b5fbb547395ed129c423d174ccdaaef16d723b4ed3e91af8aa.scope: Deactivated successfully.
Jan 26 13:03:15 np0005596060 podman[249702]: 2026-01-26 18:03:15.575727117 +0000 UTC m=+0.049292516 container create 0dee1038fe2cd6050942bf1b192ace3aee5dca0b55f0747ba95517c7e658bf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 13:03:15 np0005596060 systemd[1]: Started libpod-conmon-0dee1038fe2cd6050942bf1b192ace3aee5dca0b55f0747ba95517c7e658bf67.scope.
Jan 26 13:03:15 np0005596060 podman[249702]: 2026-01-26 18:03:15.553758501 +0000 UTC m=+0.027323930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:03:15 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:03:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce5590c918523ce1370dd84bbbba67ddae88f545ca6d0318e57cbfbcfe7de17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce5590c918523ce1370dd84bbbba67ddae88f545ca6d0318e57cbfbcfe7de17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce5590c918523ce1370dd84bbbba67ddae88f545ca6d0318e57cbfbcfe7de17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ce5590c918523ce1370dd84bbbba67ddae88f545ca6d0318e57cbfbcfe7de17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:15 np0005596060 podman[249702]: 2026-01-26 18:03:15.682395121 +0000 UTC m=+0.155960540 container init 0dee1038fe2cd6050942bf1b192ace3aee5dca0b55f0747ba95517c7e658bf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:03:15 np0005596060 podman[249702]: 2026-01-26 18:03:15.690903483 +0000 UTC m=+0.164468882 container start 0dee1038fe2cd6050942bf1b192ace3aee5dca0b55f0747ba95517c7e658bf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 13:03:15 np0005596060 podman[249702]: 2026-01-26 18:03:15.694579394 +0000 UTC m=+0.168144793 container attach 0dee1038fe2cd6050942bf1b192ace3aee5dca0b55f0747ba95517c7e658bf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 13:03:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 97 op/s
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]: {
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:    "1": [
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:        {
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            "devices": [
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "/dev/loop3"
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            ],
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            "lv_name": "ceph_lv0",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            "lv_size": "7511998464",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            "name": "ceph_lv0",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            "tags": {
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "ceph.cluster_name": "ceph",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "ceph.crush_device_class": "",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "ceph.encrypted": "0",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "ceph.osd_id": "1",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "ceph.type": "block",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:                "ceph.vdo": "0"
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            },
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            "type": "block",
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:            "vg_name": "ceph_vg0"
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:        }
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]:    ]
Jan 26 13:03:16 np0005596060 vigorous_archimedes[249718]: }
Jan 26 13:03:16 np0005596060 systemd[1]: libpod-0dee1038fe2cd6050942bf1b192ace3aee5dca0b55f0747ba95517c7e658bf67.scope: Deactivated successfully.
Jan 26 13:03:16 np0005596060 podman[249702]: 2026-01-26 18:03:16.531128956 +0000 UTC m=+1.004694355 container died 0dee1038fe2cd6050942bf1b192ace3aee5dca0b55f0747ba95517c7e658bf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:03:16 np0005596060 systemd[1]: var-lib-containers-storage-overlay-3ce5590c918523ce1370dd84bbbba67ddae88f545ca6d0318e57cbfbcfe7de17-merged.mount: Deactivated successfully.
Jan 26 13:03:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:16.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:16 np0005596060 podman[249702]: 2026-01-26 18:03:16.901916721 +0000 UTC m=+1.375482120 container remove 0dee1038fe2cd6050942bf1b192ace3aee5dca0b55f0747ba95517c7e658bf67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_archimedes, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:03:16 np0005596060 systemd[1]: libpod-conmon-0dee1038fe2cd6050942bf1b192ace3aee5dca0b55f0747ba95517c7e658bf67.scope: Deactivated successfully.
Jan 26 13:03:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:17.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:17 np0005596060 podman[249884]: 2026-01-26 18:03:17.580638077 +0000 UTC m=+0.049296258 container create c8619e6ebb077d292f6122bcd4f6ab0f2401cc54b72311c3c3c90d18133d20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kepler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 13:03:17 np0005596060 systemd[1]: Started libpod-conmon-c8619e6ebb077d292f6122bcd4f6ab0f2401cc54b72311c3c3c90d18133d20e2.scope.
Jan 26 13:03:17 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:03:17 np0005596060 podman[249884]: 2026-01-26 18:03:17.652754901 +0000 UTC m=+0.121413112 container init c8619e6ebb077d292f6122bcd4f6ab0f2401cc54b72311c3c3c90d18133d20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kepler, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:03:17 np0005596060 podman[249884]: 2026-01-26 18:03:17.658445343 +0000 UTC m=+0.127103564 container start c8619e6ebb077d292f6122bcd4f6ab0f2401cc54b72311c3c3c90d18133d20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:03:17 np0005596060 podman[249884]: 2026-01-26 18:03:17.564002623 +0000 UTC m=+0.032660834 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:03:17 np0005596060 hardcore_kepler[249900]: 167 167
Jan 26 13:03:17 np0005596060 systemd[1]: libpod-c8619e6ebb077d292f6122bcd4f6ab0f2401cc54b72311c3c3c90d18133d20e2.scope: Deactivated successfully.
Jan 26 13:03:17 np0005596060 podman[249884]: 2026-01-26 18:03:17.663869078 +0000 UTC m=+0.132527299 container attach c8619e6ebb077d292f6122bcd4f6ab0f2401cc54b72311c3c3c90d18133d20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kepler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 13:03:17 np0005596060 podman[249884]: 2026-01-26 18:03:17.664255917 +0000 UTC m=+0.132914138 container died c8619e6ebb077d292f6122bcd4f6ab0f2401cc54b72311c3c3c90d18133d20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kepler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:03:17 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b1f8662ce17cdae09567131694be05c365f6239edd7a7296ea8257d6ceddd864-merged.mount: Deactivated successfully.
Jan 26 13:03:17 np0005596060 podman[249884]: 2026-01-26 18:03:17.707527914 +0000 UTC m=+0.176186095 container remove c8619e6ebb077d292f6122bcd4f6ab0f2401cc54b72311c3c3c90d18133d20e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_kepler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 26 13:03:17 np0005596060 systemd[1]: libpod-conmon-c8619e6ebb077d292f6122bcd4f6ab0f2401cc54b72311c3c3c90d18133d20e2.scope: Deactivated successfully.
Jan 26 13:03:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 0 B/s wr, 97 op/s
Jan 26 13:03:17 np0005596060 podman[249924]: 2026-01-26 18:03:17.888493506 +0000 UTC m=+0.038598751 container create 42d279d12948389366d5f67e136ba4da785dcf3658124b28683ce27a5976b3be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:03:17 np0005596060 systemd[1]: Started libpod-conmon-42d279d12948389366d5f67e136ba4da785dcf3658124b28683ce27a5976b3be.scope.
Jan 26 13:03:17 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:03:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379c1cae74cca764e2082379f33f55aa4e03c67928c73cf66c3de3614a1d8aa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379c1cae74cca764e2082379f33f55aa4e03c67928c73cf66c3de3614a1d8aa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379c1cae74cca764e2082379f33f55aa4e03c67928c73cf66c3de3614a1d8aa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/379c1cae74cca764e2082379f33f55aa4e03c67928c73cf66c3de3614a1d8aa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:03:17 np0005596060 podman[249924]: 2026-01-26 18:03:17.966159748 +0000 UTC m=+0.116265023 container init 42d279d12948389366d5f67e136ba4da785dcf3658124b28683ce27a5976b3be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 13:03:17 np0005596060 podman[249924]: 2026-01-26 18:03:17.872848847 +0000 UTC m=+0.022954112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:03:17 np0005596060 podman[249924]: 2026-01-26 18:03:17.974578578 +0000 UTC m=+0.124683823 container start 42d279d12948389366d5f67e136ba4da785dcf3658124b28683ce27a5976b3be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 26 13:03:17 np0005596060 podman[249924]: 2026-01-26 18:03:17.97911143 +0000 UTC m=+0.129216675 container attach 42d279d12948389366d5f67e136ba4da785dcf3658124b28683ce27a5976b3be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 13:03:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:03:18 np0005596060 vibrant_lalande[249941]: {
Jan 26 13:03:18 np0005596060 vibrant_lalande[249941]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:03:18 np0005596060 vibrant_lalande[249941]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:03:18 np0005596060 vibrant_lalande[249941]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:03:18 np0005596060 vibrant_lalande[249941]:        "osd_id": 1,
Jan 26 13:03:18 np0005596060 vibrant_lalande[249941]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:03:18 np0005596060 vibrant_lalande[249941]:        "type": "bluestore"
Jan 26 13:03:18 np0005596060 vibrant_lalande[249941]:    }
Jan 26 13:03:18 np0005596060 vibrant_lalande[249941]: }
Jan 26 13:03:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:18.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:18 np0005596060 systemd[1]: libpod-42d279d12948389366d5f67e136ba4da785dcf3658124b28683ce27a5976b3be.scope: Deactivated successfully.
Jan 26 13:03:18 np0005596060 conmon[249941]: conmon 42d279d12948389366d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42d279d12948389366d5f67e136ba4da785dcf3658124b28683ce27a5976b3be.scope/container/memory.events
Jan 26 13:03:18 np0005596060 podman[249924]: 2026-01-26 18:03:18.862162479 +0000 UTC m=+1.012267724 container died 42d279d12948389366d5f67e136ba4da785dcf3658124b28683ce27a5976b3be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 13:03:18 np0005596060 systemd[1]: var-lib-containers-storage-overlay-379c1cae74cca764e2082379f33f55aa4e03c67928c73cf66c3de3614a1d8aa3-merged.mount: Deactivated successfully.
Jan 26 13:03:18 np0005596060 podman[249924]: 2026-01-26 18:03:18.91644972 +0000 UTC m=+1.066554965 container remove 42d279d12948389366d5f67e136ba4da785dcf3658124b28683ce27a5976b3be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_lalande, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:03:18 np0005596060 systemd[1]: libpod-conmon-42d279d12948389366d5f67e136ba4da785dcf3658124b28683ce27a5976b3be.scope: Deactivated successfully.
Jan 26 13:03:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:03:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:03:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:03:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:03:19 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 59df5051-23e1-4948-9d1f-85cf5b4d96c2 does not exist
Jan 26 13:03:19 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 76983b2c-e7e4-4fa9-8948-a1017bbd6168 does not exist
Jan 26 13:03:19 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 254c0253-89da-4052-a412-c69d89686d8d does not exist
Jan 26 13:03:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:19.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Jan 26 13:03:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:03:20 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:03:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:20.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:21.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 0 B/s wr, 7 op/s
Jan 26 13:03:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:22.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:03:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:23.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:24.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:25.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:26.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:27.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:03:28 np0005596060 podman[250078]: 2026-01-26 18:03:28.793698001 +0000 UTC m=+0.053901452 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:03:28 np0005596060 podman[250079]: 2026-01-26 18:03:28.821500483 +0000 UTC m=+0.081703874 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 13:03:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:28.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:29.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:30.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:31.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:32.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:03:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:33.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:34.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:35.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:36.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:37.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:03:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:38.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:39.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:40.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:41.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:42.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:03:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:43.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:03:44
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'images', 'vms']
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:03:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:03:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:44.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:45.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:46.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:47.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:03:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:48.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:49.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.492 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.492 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.510 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.510 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.510 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.534 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.534 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.535 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.536 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.536 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.536 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.576 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.577 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.577 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.577 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:03:50 np0005596060 nova_compute[247421]: 2026-01-26 18:03:50.578 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:03:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:50.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:03:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2775215346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.024 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.176 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.177 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5219MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.177 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.178 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.313 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.313 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.332 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:03:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:51.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:03:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2037143521' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.802 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.807 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:03:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.831 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.833 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.834 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.949 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.950 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:03:51 np0005596060 nova_compute[247421]: 2026-01-26 18:03:51.950 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:03:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:03:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:52.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:03:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:03:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:53.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:54.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:55.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:56.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:57.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:03:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:03:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:03:58.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:03:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:03:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:03:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:03:59.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:03:59 np0005596060 podman[250233]: 2026-01-26 18:03:59.802109188 +0000 UTC m=+0.066424464 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:03:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:03:59 np0005596060 podman[250234]: 2026-01-26 18:03:59.829080959 +0000 UTC m=+0.090543454 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 13:04:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:00.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:01.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:04:01.477 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:04:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:04:01.479 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:04:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:04:01.481 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:04:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:02.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:04:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:03.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:04:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:04.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:05.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:06.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:04:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:07.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:04:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:04:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:08.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:09.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:10.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:11.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:12.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:04:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:13.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:04:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:04:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:04:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:04:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:04:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:04:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:04:14.737 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:04:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:04:14.738 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:04:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:04:14.738 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:04:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:14.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:15.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:16.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:17.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:04:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:18.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:19.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:20.678261) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450660678446, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 2116, "num_deletes": 251, "total_data_size": 3979862, "memory_usage": 4034776, "flush_reason": "Manual Compaction"}
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 26 13:04:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:20.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450660982553, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 3872489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17366, "largest_seqno": 19481, "table_properties": {"data_size": 3862913, "index_size": 6070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19732, "raw_average_key_size": 20, "raw_value_size": 3843603, "raw_average_value_size": 3966, "num_data_blocks": 270, "num_entries": 969, "num_filter_entries": 969, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769450422, "oldest_key_time": 1769450422, "file_creation_time": 1769450660, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 304340 microseconds, and 16838 cpu microseconds.
Jan 26 13:04:20 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:04:21 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 29d503ba-5456-4614-8997-776aec265474 does not exist
Jan 26 13:04:21 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev fa6c64cb-07b0-4b68-bd33-839851e860f8 does not exist
Jan 26 13:04:21 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9e044226-9d9a-4849-8302-c503153e61b5 does not exist
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:20.982621) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 3872489 bytes OK
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:20.982652) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.109358) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.109431) EVENT_LOG_v1 {"time_micros": 1769450661109415, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.109462) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3971231, prev total WAL file size 3971692, number of live WAL files 2.
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.111025) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(3781KB)], [41(7958KB)]
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450661111215, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 12021701, "oldest_snapshot_seqno": -1}
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4538 keys, 9945367 bytes, temperature: kUnknown
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450661351713, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 9945367, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9912086, "index_size": 20819, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 113576, "raw_average_key_size": 25, "raw_value_size": 9826979, "raw_average_value_size": 2165, "num_data_blocks": 863, "num_entries": 4538, "num_filter_entries": 4538, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769450661, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:04:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:21.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.352070) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9945367 bytes
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.512491) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 50.0 rd, 41.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.8 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 5065, records dropped: 527 output_compression: NoCompression
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.512577) EVENT_LOG_v1 {"time_micros": 1769450661512524, "job": 20, "event": "compaction_finished", "compaction_time_micros": 240617, "compaction_time_cpu_micros": 47769, "output_level": 6, "num_output_files": 1, "total_output_size": 9945367, "num_input_records": 5065, "num_output_records": 4538, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450661513533, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450661514949, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.110854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.515005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.515010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.515011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.515013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:04:21 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:04:21.515014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:04:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:21 np0005596060 podman[250612]: 2026-01-26 18:04:21.750136163 +0000 UTC m=+0.022338370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:04:21 np0005596060 podman[250612]: 2026-01-26 18:04:21.95806898 +0000 UTC m=+0.230271167 container create 66c8b78c053f05ad23d31967f9411f83a85f7970fdaaa4b775e3884419341d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 13:04:22 np0005596060 systemd[1]: Started libpod-conmon-66c8b78c053f05ad23d31967f9411f83a85f7970fdaaa4b775e3884419341d16.scope.
Jan 26 13:04:22 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:04:22 np0005596060 podman[250612]: 2026-01-26 18:04:22.058166177 +0000 UTC m=+0.330368384 container init 66c8b78c053f05ad23d31967f9411f83a85f7970fdaaa4b775e3884419341d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:04:22 np0005596060 podman[250612]: 2026-01-26 18:04:22.065446999 +0000 UTC m=+0.337649186 container start 66c8b78c053f05ad23d31967f9411f83a85f7970fdaaa4b775e3884419341d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:04:22 np0005596060 laughing_engelbart[250629]: 167 167
Jan 26 13:04:22 np0005596060 systemd[1]: libpod-66c8b78c053f05ad23d31967f9411f83a85f7970fdaaa4b775e3884419341d16.scope: Deactivated successfully.
Jan 26 13:04:22 np0005596060 conmon[250629]: conmon 66c8b78c053f05ad23d3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66c8b78c053f05ad23d31967f9411f83a85f7970fdaaa4b775e3884419341d16.scope/container/memory.events
Jan 26 13:04:22 np0005596060 podman[250612]: 2026-01-26 18:04:22.090717022 +0000 UTC m=+0.362919209 container attach 66c8b78c053f05ad23d31967f9411f83a85f7970fdaaa4b775e3884419341d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 26 13:04:22 np0005596060 podman[250612]: 2026-01-26 18:04:22.092596419 +0000 UTC m=+0.364798606 container died 66c8b78c053f05ad23d31967f9411f83a85f7970fdaaa4b775e3884419341d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:04:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-221c4b85057c6f442100dafb0403e7640a42ffa3c78be364f619015f8aee670a-merged.mount: Deactivated successfully.
Jan 26 13:04:22 np0005596060 podman[250612]: 2026-01-26 18:04:22.399043413 +0000 UTC m=+0.671245610 container remove 66c8b78c053f05ad23d31967f9411f83a85f7970fdaaa4b775e3884419341d16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 13:04:22 np0005596060 systemd[1]: libpod-conmon-66c8b78c053f05ad23d31967f9411f83a85f7970fdaaa4b775e3884419341d16.scope: Deactivated successfully.
Jan 26 13:04:22 np0005596060 podman[250654]: 2026-01-26 18:04:22.590718953 +0000 UTC m=+0.059426810 container create 23f2ce1ad1b2415f4ca30edd76a432f0977a810d37fa400f1cbe307cb093b885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:04:22 np0005596060 systemd[1]: Started libpod-conmon-23f2ce1ad1b2415f4ca30edd76a432f0977a810d37fa400f1cbe307cb093b885.scope.
Jan 26 13:04:22 np0005596060 podman[250654]: 2026-01-26 18:04:22.561039549 +0000 UTC m=+0.029747426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:04:22 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:04:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311dd5bfe4c884ebfe95b78bd7fcf4bb052f77b1c556df4c7a71478dcd66e633/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311dd5bfe4c884ebfe95b78bd7fcf4bb052f77b1c556df4c7a71478dcd66e633/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311dd5bfe4c884ebfe95b78bd7fcf4bb052f77b1c556df4c7a71478dcd66e633/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311dd5bfe4c884ebfe95b78bd7fcf4bb052f77b1c556df4c7a71478dcd66e633/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/311dd5bfe4c884ebfe95b78bd7fcf4bb052f77b1c556df4c7a71478dcd66e633/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:22 np0005596060 podman[250654]: 2026-01-26 18:04:22.686621754 +0000 UTC m=+0.155329661 container init 23f2ce1ad1b2415f4ca30edd76a432f0977a810d37fa400f1cbe307cb093b885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_liskov, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:04:22 np0005596060 podman[250654]: 2026-01-26 18:04:22.694014969 +0000 UTC m=+0.162722826 container start 23f2ce1ad1b2415f4ca30edd76a432f0977a810d37fa400f1cbe307cb093b885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_liskov, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:04:22 np0005596060 podman[250654]: 2026-01-26 18:04:22.697967488 +0000 UTC m=+0.166675365 container attach 23f2ce1ad1b2415f4ca30edd76a432f0977a810d37fa400f1cbe307cb093b885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 13:04:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:22.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:04:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:23.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:23 np0005596060 elastic_liskov[250670]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:04:23 np0005596060 elastic_liskov[250670]: --> relative data size: 1.0
Jan 26 13:04:23 np0005596060 elastic_liskov[250670]: --> All data devices are unavailable
Jan 26 13:04:23 np0005596060 systemd[1]: libpod-23f2ce1ad1b2415f4ca30edd76a432f0977a810d37fa400f1cbe307cb093b885.scope: Deactivated successfully.
Jan 26 13:04:23 np0005596060 podman[250654]: 2026-01-26 18:04:23.510038664 +0000 UTC m=+0.978746521 container died 23f2ce1ad1b2415f4ca30edd76a432f0977a810d37fa400f1cbe307cb093b885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_liskov, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:04:23 np0005596060 systemd[1]: var-lib-containers-storage-overlay-311dd5bfe4c884ebfe95b78bd7fcf4bb052f77b1c556df4c7a71478dcd66e633-merged.mount: Deactivated successfully.
Jan 26 13:04:23 np0005596060 podman[250654]: 2026-01-26 18:04:23.571716047 +0000 UTC m=+1.040423904 container remove 23f2ce1ad1b2415f4ca30edd76a432f0977a810d37fa400f1cbe307cb093b885 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 13:04:23 np0005596060 systemd[1]: libpod-conmon-23f2ce1ad1b2415f4ca30edd76a432f0977a810d37fa400f1cbe307cb093b885.scope: Deactivated successfully.
Jan 26 13:04:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:24 np0005596060 podman[250837]: 2026-01-26 18:04:24.16884346 +0000 UTC m=+0.037942171 container create f91f57585dc8c00993cf3ab7ac3daf2c0fcedd55d3b692bef41bde93bddf78d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_davinci, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:04:24 np0005596060 systemd[1]: Started libpod-conmon-f91f57585dc8c00993cf3ab7ac3daf2c0fcedd55d3b692bef41bde93bddf78d9.scope.
Jan 26 13:04:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:04:24 np0005596060 podman[250837]: 2026-01-26 18:04:24.153537497 +0000 UTC m=+0.022636218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:04:24 np0005596060 podman[250837]: 2026-01-26 18:04:24.264346682 +0000 UTC m=+0.133445393 container init f91f57585dc8c00993cf3ab7ac3daf2c0fcedd55d3b692bef41bde93bddf78d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_davinci, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:04:24 np0005596060 podman[250837]: 2026-01-26 18:04:24.273563352 +0000 UTC m=+0.142662043 container start f91f57585dc8c00993cf3ab7ac3daf2c0fcedd55d3b692bef41bde93bddf78d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_davinci, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:04:24 np0005596060 eloquent_davinci[250854]: 167 167
Jan 26 13:04:24 np0005596060 systemd[1]: libpod-f91f57585dc8c00993cf3ab7ac3daf2c0fcedd55d3b692bef41bde93bddf78d9.scope: Deactivated successfully.
Jan 26 13:04:24 np0005596060 podman[250837]: 2026-01-26 18:04:24.277441589 +0000 UTC m=+0.146540300 container attach f91f57585dc8c00993cf3ab7ac3daf2c0fcedd55d3b692bef41bde93bddf78d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:04:24 np0005596060 podman[250837]: 2026-01-26 18:04:24.278518916 +0000 UTC m=+0.147617607 container died f91f57585dc8c00993cf3ab7ac3daf2c0fcedd55d3b692bef41bde93bddf78d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 13:04:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7b82d708e2a0a06c560d7e012c7f0774148fb3a6d2cb05b415ad32b9b52a6e29-merged.mount: Deactivated successfully.
Jan 26 13:04:24 np0005596060 podman[250837]: 2026-01-26 18:04:24.320874977 +0000 UTC m=+0.189973668 container remove f91f57585dc8c00993cf3ab7ac3daf2c0fcedd55d3b692bef41bde93bddf78d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_davinci, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:04:24 np0005596060 systemd[1]: libpod-conmon-f91f57585dc8c00993cf3ab7ac3daf2c0fcedd55d3b692bef41bde93bddf78d9.scope: Deactivated successfully.
Jan 26 13:04:24 np0005596060 podman[250879]: 2026-01-26 18:04:24.489482439 +0000 UTC m=+0.043344236 container create 0307814d20895fafaa736dd5410477d536ccf41ff803450cdbac497aaec1f94c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_noether, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 13:04:24 np0005596060 systemd[1]: Started libpod-conmon-0307814d20895fafaa736dd5410477d536ccf41ff803450cdbac497aaec1f94c.scope.
Jan 26 13:04:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:04:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb2565e1ad39b0c767e1e1788e6121a8478211d02f2a6a9d21a59c470efa55c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb2565e1ad39b0c767e1e1788e6121a8478211d02f2a6a9d21a59c470efa55c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb2565e1ad39b0c767e1e1788e6121a8478211d02f2a6a9d21a59c470efa55c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cb2565e1ad39b0c767e1e1788e6121a8478211d02f2a6a9d21a59c470efa55c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:24 np0005596060 podman[250879]: 2026-01-26 18:04:24.569106053 +0000 UTC m=+0.122967870 container init 0307814d20895fafaa736dd5410477d536ccf41ff803450cdbac497aaec1f94c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_noether, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 13:04:24 np0005596060 podman[250879]: 2026-01-26 18:04:24.47193034 +0000 UTC m=+0.025792157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:04:24 np0005596060 podman[250879]: 2026-01-26 18:04:24.575454902 +0000 UTC m=+0.129316699 container start 0307814d20895fafaa736dd5410477d536ccf41ff803450cdbac497aaec1f94c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 13:04:24 np0005596060 podman[250879]: 2026-01-26 18:04:24.578342505 +0000 UTC m=+0.132204302 container attach 0307814d20895fafaa736dd5410477d536ccf41ff803450cdbac497aaec1f94c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_noether, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:04:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:24.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:25 np0005596060 bold_noether[250919]: {
Jan 26 13:04:25 np0005596060 bold_noether[250919]:    "1": [
Jan 26 13:04:25 np0005596060 bold_noether[250919]:        {
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            "devices": [
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "/dev/loop3"
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            ],
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            "lv_name": "ceph_lv0",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            "lv_size": "7511998464",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            "name": "ceph_lv0",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            "tags": {
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "ceph.cluster_name": "ceph",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "ceph.crush_device_class": "",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "ceph.encrypted": "0",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "ceph.osd_id": "1",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "ceph.type": "block",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:                "ceph.vdo": "0"
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            },
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            "type": "block",
Jan 26 13:04:25 np0005596060 bold_noether[250919]:            "vg_name": "ceph_vg0"
Jan 26 13:04:25 np0005596060 bold_noether[250919]:        }
Jan 26 13:04:25 np0005596060 bold_noether[250919]:    ]
Jan 26 13:04:25 np0005596060 bold_noether[250919]: }
Jan 26 13:04:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:25.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:25 np0005596060 systemd[1]: libpod-0307814d20895fafaa736dd5410477d536ccf41ff803450cdbac497aaec1f94c.scope: Deactivated successfully.
Jan 26 13:04:25 np0005596060 podman[250879]: 2026-01-26 18:04:25.447488799 +0000 UTC m=+1.001350596 container died 0307814d20895fafaa736dd5410477d536ccf41ff803450cdbac497aaec1f94c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_noether, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 13:04:25 np0005596060 systemd[1]: var-lib-containers-storage-overlay-5cb2565e1ad39b0c767e1e1788e6121a8478211d02f2a6a9d21a59c470efa55c-merged.mount: Deactivated successfully.
Jan 26 13:04:25 np0005596060 podman[250879]: 2026-01-26 18:04:25.517941193 +0000 UTC m=+1.071802990 container remove 0307814d20895fafaa736dd5410477d536ccf41ff803450cdbac497aaec1f94c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_noether, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:04:25 np0005596060 systemd[1]: libpod-conmon-0307814d20895fafaa736dd5410477d536ccf41ff803450cdbac497aaec1f94c.scope: Deactivated successfully.
Jan 26 13:04:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:26 np0005596060 podman[251108]: 2026-01-26 18:04:26.158602826 +0000 UTC m=+0.039228363 container create 046739472b825a72f3e6a1174c0416c596c2819dcb8f44f0e80993d2678e7a95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 13:04:26 np0005596060 systemd[1]: Started libpod-conmon-046739472b825a72f3e6a1174c0416c596c2819dcb8f44f0e80993d2678e7a95.scope.
Jan 26 13:04:26 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:04:26 np0005596060 podman[251108]: 2026-01-26 18:04:26.14157014 +0000 UTC m=+0.022195697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:04:26 np0005596060 podman[251108]: 2026-01-26 18:04:26.242884957 +0000 UTC m=+0.123510584 container init 046739472b825a72f3e6a1174c0416c596c2819dcb8f44f0e80993d2678e7a95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:04:26 np0005596060 podman[251108]: 2026-01-26 18:04:26.252159329 +0000 UTC m=+0.132784866 container start 046739472b825a72f3e6a1174c0416c596c2819dcb8f44f0e80993d2678e7a95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:04:26 np0005596060 podman[251108]: 2026-01-26 18:04:26.256155809 +0000 UTC m=+0.136781366 container attach 046739472b825a72f3e6a1174c0416c596c2819dcb8f44f0e80993d2678e7a95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:04:26 np0005596060 agitated_tesla[251125]: 167 167
Jan 26 13:04:26 np0005596060 systemd[1]: libpod-046739472b825a72f3e6a1174c0416c596c2819dcb8f44f0e80993d2678e7a95.scope: Deactivated successfully.
Jan 26 13:04:26 np0005596060 podman[251108]: 2026-01-26 18:04:26.258777055 +0000 UTC m=+0.139402612 container died 046739472b825a72f3e6a1174c0416c596c2819dcb8f44f0e80993d2678e7a95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:04:26 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f55ceeb682905341e3ada0160971082c7dbd5e47ca363bd8ac208156fe0b51c4-merged.mount: Deactivated successfully.
Jan 26 13:04:26 np0005596060 podman[251108]: 2026-01-26 18:04:26.296546471 +0000 UTC m=+0.177172008 container remove 046739472b825a72f3e6a1174c0416c596c2819dcb8f44f0e80993d2678e7a95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_tesla, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 13:04:26 np0005596060 systemd[1]: libpod-conmon-046739472b825a72f3e6a1174c0416c596c2819dcb8f44f0e80993d2678e7a95.scope: Deactivated successfully.
Jan 26 13:04:26 np0005596060 podman[251148]: 2026-01-26 18:04:26.444611599 +0000 UTC m=+0.040115776 container create 44da85c5ca51738f5ff0d3c45fd142f108e2173db5cc2cab6b54262ba717e511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dewdney, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 13:04:26 np0005596060 systemd[1]: Started libpod-conmon-44da85c5ca51738f5ff0d3c45fd142f108e2173db5cc2cab6b54262ba717e511.scope.
Jan 26 13:04:26 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:04:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f50b0b290104367863cfde284d94b2058eeb171c78b1a9a7864c5187984a88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:26 np0005596060 podman[251148]: 2026-01-26 18:04:26.427124541 +0000 UTC m=+0.022628738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:04:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f50b0b290104367863cfde284d94b2058eeb171c78b1a9a7864c5187984a88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f50b0b290104367863cfde284d94b2058eeb171c78b1a9a7864c5187984a88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3f50b0b290104367863cfde284d94b2058eeb171c78b1a9a7864c5187984a88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:04:26 np0005596060 podman[251148]: 2026-01-26 18:04:26.534486749 +0000 UTC m=+0.129990956 container init 44da85c5ca51738f5ff0d3c45fd142f108e2173db5cc2cab6b54262ba717e511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:04:26 np0005596060 podman[251148]: 2026-01-26 18:04:26.541615778 +0000 UTC m=+0.137119955 container start 44da85c5ca51738f5ff0d3c45fd142f108e2173db5cc2cab6b54262ba717e511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dewdney, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:04:26 np0005596060 podman[251148]: 2026-01-26 18:04:26.545074294 +0000 UTC m=+0.140578491 container attach 44da85c5ca51738f5ff0d3c45fd142f108e2173db5cc2cab6b54262ba717e511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dewdney, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 13:04:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:26.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:27 np0005596060 exciting_dewdney[251164]: {
Jan 26 13:04:27 np0005596060 exciting_dewdney[251164]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:04:27 np0005596060 exciting_dewdney[251164]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:04:27 np0005596060 exciting_dewdney[251164]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:04:27 np0005596060 exciting_dewdney[251164]:        "osd_id": 1,
Jan 26 13:04:27 np0005596060 exciting_dewdney[251164]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:04:27 np0005596060 exciting_dewdney[251164]:        "type": "bluestore"
Jan 26 13:04:27 np0005596060 exciting_dewdney[251164]:    }
Jan 26 13:04:27 np0005596060 exciting_dewdney[251164]: }
Jan 26 13:04:27 np0005596060 systemd[1]: libpod-44da85c5ca51738f5ff0d3c45fd142f108e2173db5cc2cab6b54262ba717e511.scope: Deactivated successfully.
Jan 26 13:04:27 np0005596060 podman[251148]: 2026-01-26 18:04:27.410615928 +0000 UTC m=+1.006120095 container died 44da85c5ca51738f5ff0d3c45fd142f108e2173db5cc2cab6b54262ba717e511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:04:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:27.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:27 np0005596060 systemd[1]: var-lib-containers-storage-overlay-e3f50b0b290104367863cfde284d94b2058eeb171c78b1a9a7864c5187984a88-merged.mount: Deactivated successfully.
Jan 26 13:04:27 np0005596060 podman[251148]: 2026-01-26 18:04:27.467250446 +0000 UTC m=+1.062754633 container remove 44da85c5ca51738f5ff0d3c45fd142f108e2173db5cc2cab6b54262ba717e511 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:04:27 np0005596060 systemd[1]: libpod-conmon-44da85c5ca51738f5ff0d3c45fd142f108e2173db5cc2cab6b54262ba717e511.scope: Deactivated successfully.
Jan 26 13:04:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:04:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:04:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:04:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:04:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:04:28 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 078b540c-8ed9-4ef0-aedf-67ec4f3cb03b does not exist
Jan 26 13:04:28 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 202f733a-ab3a-4b4c-9312-8088eca4fbf3 does not exist
Jan 26 13:04:28 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 83ea967d-5905-43a2-93cd-553466f7edb0 does not exist
Jan 26 13:04:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:28.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:29.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:29 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:04:29 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:04:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:30 np0005596060 podman[251247]: 2026-01-26 18:04:30.809325996 +0000 UTC m=+0.069789829 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 13:04:30 np0005596060 podman[251248]: 2026-01-26 18:04:30.847977924 +0000 UTC m=+0.107569235 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 26 13:04:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:30.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:31.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:32.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:04:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:33.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:34.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:35.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:36.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:37.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:04:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:38.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:39.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:40.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:42.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:04:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:43.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:04:44
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', '.rgw.root', '.mgr', 'vms', 'backups', 'default.rgw.meta']
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:04:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:04:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:44.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:45.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:46.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:47.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:04:49 np0005596060 nova_compute[247421]: 2026-01-26 18:04:49.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:04:49 np0005596060 nova_compute[247421]: 2026-01-26 18:04:49.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:04:49 np0005596060 nova_compute[247421]: 2026-01-26 18:04:49.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:04:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:49.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:49.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:50 np0005596060 nova_compute[247421]: 2026-01-26 18:04:50.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:04:50 np0005596060 nova_compute[247421]: 2026-01-26 18:04:50.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:04:50 np0005596060 nova_compute[247421]: 2026-01-26 18:04:50.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:04:50 np0005596060 nova_compute[247421]: 2026-01-26 18:04:50.676 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:04:50 np0005596060 nova_compute[247421]: 2026-01-26 18:04:50.677 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:04:50 np0005596060 nova_compute[247421]: 2026-01-26 18:04:50.709 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:04:50 np0005596060 nova_compute[247421]: 2026-01-26 18:04:50.710 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:04:50 np0005596060 nova_compute[247421]: 2026-01-26 18:04:50.710 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:04:50 np0005596060 nova_compute[247421]: 2026-01-26 18:04:50.710 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:04:50 np0005596060 nova_compute[247421]: 2026-01-26 18:04:50.711 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:04:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:04:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578271630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:04:51 np0005596060 nova_compute[247421]: 2026-01-26 18:04:51.134 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:04:51 np0005596060 nova_compute[247421]: 2026-01-26 18:04:51.326 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:04:51 np0005596060 nova_compute[247421]: 2026-01-26 18:04:51.327 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5210MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:04:51 np0005596060 nova_compute[247421]: 2026-01-26 18:04:51.327 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:04:51 np0005596060 nova_compute[247421]: 2026-01-26 18:04:51.328 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:04:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:51.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:51 np0005596060 nova_compute[247421]: 2026-01-26 18:04:51.614 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:04:51 np0005596060 nova_compute[247421]: 2026-01-26 18:04:51.614 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:04:51 np0005596060 nova_compute[247421]: 2026-01-26 18:04:51.633 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:04:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:51.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:04:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3092892930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:04:52 np0005596060 nova_compute[247421]: 2026-01-26 18:04:52.108 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:04:52 np0005596060 nova_compute[247421]: 2026-01-26 18:04:52.113 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:04:52 np0005596060 nova_compute[247421]: 2026-01-26 18:04:52.138 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:04:52 np0005596060 nova_compute[247421]: 2026-01-26 18:04:52.139 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:04:52 np0005596060 nova_compute[247421]: 2026-01-26 18:04:52.140 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:04:53 np0005596060 nova_compute[247421]: 2026-01-26 18:04:53.113 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:04:53 np0005596060 nova_compute[247421]: 2026-01-26 18:04:53.114 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:04:53 np0005596060 nova_compute[247421]: 2026-01-26 18:04:53.114 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:04:53 np0005596060 nova_compute[247421]: 2026-01-26 18:04:53.114 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:04:53 np0005596060 nova_compute[247421]: 2026-01-26 18:04:53.114 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:04:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:53.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:04:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:53.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:55.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:55.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:57.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:57.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:04:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:04:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:04:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:04:59.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:04:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:04:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:04:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:04:59.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:04:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:01.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:01.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:01 np0005596060 podman[251401]: 2026-01-26 18:05:01.795142987 +0000 UTC m=+0.054436354 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:05:01 np0005596060 podman[251402]: 2026-01-26 18:05:01.829091537 +0000 UTC m=+0.086895737 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:05:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:03.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:05:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:05:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:03.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:05.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:05.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:07.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:07.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:05:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:09.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:09.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:11.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:05:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:11.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:05:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:13.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:05:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:13.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:05:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:05:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:05:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:05:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:05:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:05:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:05:14.738 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:05:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:05:14.738 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:05:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:05:14.738 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:05:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:15.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:15.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:17.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:05:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:17.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:05:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:05:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:19.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:19.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=404 latency=0.003000075s ======
Jan 26 13:05:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:20.118 +0000] "GET /info HTTP/1.1" 404 150 - "python-urllib3/1.26.5" - latency=0.003000075s
Jan 26 13:05:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - - [26/Jan/2026:18:05:20.135 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.000000000s
Jan 26 13:05:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:21.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:21.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:23.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:05:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:23.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 26 13:05:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 26 13:05:24 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 26 13:05:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:25.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 26 13:05:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 26 13:05:25 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 26 13:05:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:05:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:25.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:05:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:27.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:05:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:27.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:05:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 13 MiB data, 165 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Jan 26 13:05:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 26 13:05:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 26 13:05:28 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 26 13:05:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:05:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:05:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:29.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:05:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:29.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 13 MiB data, 165 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.1 MiB/s wr, 20 op/s
Jan 26 13:05:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:05:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:05:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:05:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:05:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:05:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:05:30 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 6b709943-8d27-4d61-ba35-9eae3133f728 does not exist
Jan 26 13:05:30 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 57cd11d4-52ba-4c29-b798-bf33f5acac43 does not exist
Jan 26 13:05:30 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7b09bc32-f1fc-43a0-a81f-c447d17bc86c does not exist
Jan 26 13:05:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:05:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:05:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:05:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:05:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:05:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:05:31 np0005596060 podman[251833]: 2026-01-26 18:05:31.108365372 +0000 UTC m=+0.029478699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:05:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:05:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:05:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:05:31 np0005596060 podman[251833]: 2026-01-26 18:05:31.402943739 +0000 UTC m=+0.324057046 container create 5db9271159541460049138a3ef169c72822ddeaa8bae8560f03e7dd55a06e0cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 13:05:31 np0005596060 systemd[1]: Started libpod-conmon-5db9271159541460049138a3ef169c72822ddeaa8bae8560f03e7dd55a06e0cd.scope.
Jan 26 13:05:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:31.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:05:31 np0005596060 podman[251833]: 2026-01-26 18:05:31.660163819 +0000 UTC m=+0.581277226 container init 5db9271159541460049138a3ef169c72822ddeaa8bae8560f03e7dd55a06e0cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:05:31 np0005596060 podman[251833]: 2026-01-26 18:05:31.672117099 +0000 UTC m=+0.593230436 container start 5db9271159541460049138a3ef169c72822ddeaa8bae8560f03e7dd55a06e0cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:05:31 np0005596060 podman[251833]: 2026-01-26 18:05:31.678126159 +0000 UTC m=+0.599239496 container attach 5db9271159541460049138a3ef169c72822ddeaa8bae8560f03e7dd55a06e0cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:05:31 np0005596060 busy_dijkstra[251849]: 167 167
Jan 26 13:05:31 np0005596060 systemd[1]: libpod-5db9271159541460049138a3ef169c72822ddeaa8bae8560f03e7dd55a06e0cd.scope: Deactivated successfully.
Jan 26 13:05:31 np0005596060 podman[251833]: 2026-01-26 18:05:31.680645652 +0000 UTC m=+0.601758949 container died 5db9271159541460049138a3ef169c72822ddeaa8bae8560f03e7dd55a06e0cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:05:31 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7649a303cd6b4124dc2e4aaf8b2fa299eba701b5c28a5281a3ed84db7582f814-merged.mount: Deactivated successfully.
Jan 26 13:05:31 np0005596060 podman[251833]: 2026-01-26 18:05:31.792534194 +0000 UTC m=+0.713647491 container remove 5db9271159541460049138a3ef169c72822ddeaa8bae8560f03e7dd55a06e0cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_dijkstra, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 13:05:31 np0005596060 systemd[1]: libpod-conmon-5db9271159541460049138a3ef169c72822ddeaa8bae8560f03e7dd55a06e0cd.scope: Deactivated successfully.
Jan 26 13:05:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:31.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 21 MiB data, 173 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 2.7 MiB/s wr, 33 op/s
Jan 26 13:05:31 np0005596060 podman[251870]: 2026-01-26 18:05:31.945291019 +0000 UTC m=+0.095964894 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 13:05:31 np0005596060 podman[251894]: 2026-01-26 18:05:31.965536496 +0000 UTC m=+0.044039144 container create 8c6ba7cf14db729de537d359b58ef9296da72470a2c5b1cd4a05d1079e8d4d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 13:05:32 np0005596060 systemd[1]: Started libpod-conmon-8c6ba7cf14db729de537d359b58ef9296da72470a2c5b1cd4a05d1079e8d4d75.scope.
Jan 26 13:05:32 np0005596060 podman[251894]: 2026-01-26 18:05:31.948247963 +0000 UTC m=+0.026750631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:05:32 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:05:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937d830f31233b5020aca3b547171244a7ba1e7755915d496800b8e5ce6adb5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937d830f31233b5020aca3b547171244a7ba1e7755915d496800b8e5ce6adb5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937d830f31233b5020aca3b547171244a7ba1e7755915d496800b8e5ce6adb5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937d830f31233b5020aca3b547171244a7ba1e7755915d496800b8e5ce6adb5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937d830f31233b5020aca3b547171244a7ba1e7755915d496800b8e5ce6adb5f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:32 np0005596060 podman[251894]: 2026-01-26 18:05:32.071713525 +0000 UTC m=+0.150216203 container init 8c6ba7cf14db729de537d359b58ef9296da72470a2c5b1cd4a05d1079e8d4d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mahavira, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:05:32 np0005596060 podman[251901]: 2026-01-26 18:05:32.076756471 +0000 UTC m=+0.124880748 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 26 13:05:32 np0005596060 podman[251894]: 2026-01-26 18:05:32.080045764 +0000 UTC m=+0.158548412 container start 8c6ba7cf14db729de537d359b58ef9296da72470a2c5b1cd4a05d1079e8d4d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 26 13:05:32 np0005596060 podman[251894]: 2026-01-26 18:05:32.084037704 +0000 UTC m=+0.162540352 container attach 8c6ba7cf14db729de537d359b58ef9296da72470a2c5b1cd4a05d1079e8d4d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mahavira, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:05:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 26 13:05:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 26 13:05:32 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 26 13:05:32 np0005596060 trusting_mahavira[251927]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:05:32 np0005596060 trusting_mahavira[251927]: --> relative data size: 1.0
Jan 26 13:05:32 np0005596060 trusting_mahavira[251927]: --> All data devices are unavailable
Jan 26 13:05:32 np0005596060 systemd[1]: libpod-8c6ba7cf14db729de537d359b58ef9296da72470a2c5b1cd4a05d1079e8d4d75.scope: Deactivated successfully.
Jan 26 13:05:32 np0005596060 podman[251894]: 2026-01-26 18:05:32.882614801 +0000 UTC m=+0.961117469 container died 8c6ba7cf14db729de537d359b58ef9296da72470a2c5b1cd4a05d1079e8d4d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mahavira, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:05:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay-937d830f31233b5020aca3b547171244a7ba1e7755915d496800b8e5ce6adb5f-merged.mount: Deactivated successfully.
Jan 26 13:05:32 np0005596060 podman[251894]: 2026-01-26 18:05:32.934393248 +0000 UTC m=+1.012895896 container remove 8c6ba7cf14db729de537d359b58ef9296da72470a2c5b1cd4a05d1079e8d4d75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mahavira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:05:32 np0005596060 systemd[1]: libpod-conmon-8c6ba7cf14db729de537d359b58ef9296da72470a2c5b1cd4a05d1079e8d4d75.scope: Deactivated successfully.
Jan 26 13:05:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:33.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:05:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 26 13:05:33 np0005596060 podman[252104]: 2026-01-26 18:05:33.63812083 +0000 UTC m=+0.045788148 container create ec3e40fc65ed9cb80b7ac4089d4b0b93f65b70d15ee03670cf741ddffb78c6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:05:33 np0005596060 systemd[1]: Started libpod-conmon-ec3e40fc65ed9cb80b7ac4089d4b0b93f65b70d15ee03670cf741ddffb78c6a2.scope.
Jan 26 13:05:33 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:05:33 np0005596060 podman[252104]: 2026-01-26 18:05:33.619216557 +0000 UTC m=+0.026883895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:05:33 np0005596060 podman[252104]: 2026-01-26 18:05:33.724620886 +0000 UTC m=+0.132288234 container init ec3e40fc65ed9cb80b7ac4089d4b0b93f65b70d15ee03670cf741ddffb78c6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_neumann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Jan 26 13:05:33 np0005596060 podman[252104]: 2026-01-26 18:05:33.731912919 +0000 UTC m=+0.139580237 container start ec3e40fc65ed9cb80b7ac4089d4b0b93f65b70d15ee03670cf741ddffb78c6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 13:05:33 np0005596060 podman[252104]: 2026-01-26 18:05:33.736238357 +0000 UTC m=+0.143905695 container attach ec3e40fc65ed9cb80b7ac4089d4b0b93f65b70d15ee03670cf741ddffb78c6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 13:05:33 np0005596060 wonderful_neumann[252120]: 167 167
Jan 26 13:05:33 np0005596060 systemd[1]: libpod-ec3e40fc65ed9cb80b7ac4089d4b0b93f65b70d15ee03670cf741ddffb78c6a2.scope: Deactivated successfully.
Jan 26 13:05:33 np0005596060 podman[252104]: 2026-01-26 18:05:33.740778181 +0000 UTC m=+0.148445499 container died ec3e40fc65ed9cb80b7ac4089d4b0b93f65b70d15ee03670cf741ddffb78c6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:05:33 np0005596060 systemd[1]: var-lib-containers-storage-overlay-17dc956c1eb063c6f2780e7057e371f6a263c7221f571fc183143f5c48ae33cd-merged.mount: Deactivated successfully.
Jan 26 13:05:33 np0005596060 podman[252104]: 2026-01-26 18:05:33.782755572 +0000 UTC m=+0.190422920 container remove ec3e40fc65ed9cb80b7ac4089d4b0b93f65b70d15ee03670cf741ddffb78c6a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 13:05:33 np0005596060 systemd[1]: libpod-conmon-ec3e40fc65ed9cb80b7ac4089d4b0b93f65b70d15ee03670cf741ddffb78c6a2.scope: Deactivated successfully.
Jan 26 13:05:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:05:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:33.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:05:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 5.1 MiB/s wr, 39 op/s
Jan 26 13:05:33 np0005596060 podman[252146]: 2026-01-26 18:05:33.959832406 +0000 UTC m=+0.045132161 container create bb1bf2592d1c835e335ee2b0b6490872870b2f23febde9f978b9ce8e944d30b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:05:34 np0005596060 systemd[1]: Started libpod-conmon-bb1bf2592d1c835e335ee2b0b6490872870b2f23febde9f978b9ce8e944d30b0.scope.
Jan 26 13:05:34 np0005596060 podman[252146]: 2026-01-26 18:05:33.941563529 +0000 UTC m=+0.026863304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:05:34 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:05:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d625939751a177975665c749616dea7ba97bd6cd80f6d6e2c45eae4459ad9d9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d625939751a177975665c749616dea7ba97bd6cd80f6d6e2c45eae4459ad9d9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d625939751a177975665c749616dea7ba97bd6cd80f6d6e2c45eae4459ad9d9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d625939751a177975665c749616dea7ba97bd6cd80f6d6e2c45eae4459ad9d9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:34 np0005596060 podman[252146]: 2026-01-26 18:05:34.072939109 +0000 UTC m=+0.158238914 container init bb1bf2592d1c835e335ee2b0b6490872870b2f23febde9f978b9ce8e944d30b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 26 13:05:34 np0005596060 podman[252146]: 2026-01-26 18:05:34.086722944 +0000 UTC m=+0.172022699 container start bb1bf2592d1c835e335ee2b0b6490872870b2f23febde9f978b9ce8e944d30b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tu, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:05:34 np0005596060 podman[252146]: 2026-01-26 18:05:34.090325864 +0000 UTC m=+0.175625739 container attach bb1bf2592d1c835e335ee2b0b6490872870b2f23febde9f978b9ce8e944d30b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:05:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 26 13:05:34 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 26 13:05:34 np0005596060 magical_tu[252163]: {
Jan 26 13:05:34 np0005596060 magical_tu[252163]:    "1": [
Jan 26 13:05:34 np0005596060 magical_tu[252163]:        {
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            "devices": [
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "/dev/loop3"
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            ],
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            "lv_name": "ceph_lv0",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            "lv_size": "7511998464",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            "name": "ceph_lv0",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            "tags": {
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "ceph.cluster_name": "ceph",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "ceph.crush_device_class": "",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "ceph.encrypted": "0",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "ceph.osd_id": "1",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "ceph.type": "block",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:                "ceph.vdo": "0"
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            },
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            "type": "block",
Jan 26 13:05:34 np0005596060 magical_tu[252163]:            "vg_name": "ceph_vg0"
Jan 26 13:05:34 np0005596060 magical_tu[252163]:        }
Jan 26 13:05:34 np0005596060 magical_tu[252163]:    ]
Jan 26 13:05:34 np0005596060 magical_tu[252163]: }
Jan 26 13:05:34 np0005596060 systemd[1]: libpod-bb1bf2592d1c835e335ee2b0b6490872870b2f23febde9f978b9ce8e944d30b0.scope: Deactivated successfully.
Jan 26 13:05:34 np0005596060 podman[252172]: 2026-01-26 18:05:34.878933752 +0000 UTC m=+0.039993723 container died bb1bf2592d1c835e335ee2b0b6490872870b2f23febde9f978b9ce8e944d30b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:05:34 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d625939751a177975665c749616dea7ba97bd6cd80f6d6e2c45eae4459ad9d9a-merged.mount: Deactivated successfully.
Jan 26 13:05:34 np0005596060 podman[252172]: 2026-01-26 18:05:34.930995645 +0000 UTC m=+0.092055616 container remove bb1bf2592d1c835e335ee2b0b6490872870b2f23febde9f978b9ce8e944d30b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tu, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 13:05:34 np0005596060 systemd[1]: libpod-conmon-bb1bf2592d1c835e335ee2b0b6490872870b2f23febde9f978b9ce8e944d30b0.scope: Deactivated successfully.
Jan 26 13:05:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:35.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:35 np0005596060 podman[252327]: 2026-01-26 18:05:35.536787974 +0000 UTC m=+0.038812173 container create 86a7513365da0cfdf8dd2a1a4f5983dda5ac2caa53425ac4a81274632516a33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_panini, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 13:05:35 np0005596060 systemd[1]: Started libpod-conmon-86a7513365da0cfdf8dd2a1a4f5983dda5ac2caa53425ac4a81274632516a33d.scope.
Jan 26 13:05:35 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:05:35 np0005596060 podman[252327]: 2026-01-26 18:05:35.518863045 +0000 UTC m=+0.020887264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:05:35 np0005596060 podman[252327]: 2026-01-26 18:05:35.690979986 +0000 UTC m=+0.193004225 container init 86a7513365da0cfdf8dd2a1a4f5983dda5ac2caa53425ac4a81274632516a33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 13:05:35 np0005596060 podman[252327]: 2026-01-26 18:05:35.696903184 +0000 UTC m=+0.198927383 container start 86a7513365da0cfdf8dd2a1a4f5983dda5ac2caa53425ac4a81274632516a33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_panini, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 26 13:05:35 np0005596060 amazing_panini[252343]: 167 167
Jan 26 13:05:35 np0005596060 systemd[1]: libpod-86a7513365da0cfdf8dd2a1a4f5983dda5ac2caa53425ac4a81274632516a33d.scope: Deactivated successfully.
Jan 26 13:05:35 np0005596060 conmon[252343]: conmon 86a7513365da0cfdf8dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86a7513365da0cfdf8dd2a1a4f5983dda5ac2caa53425ac4a81274632516a33d.scope/container/memory.events
Jan 26 13:05:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:35.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 3.8 MiB/s wr, 25 op/s
Jan 26 13:05:35 np0005596060 podman[252327]: 2026-01-26 18:05:35.905314544 +0000 UTC m=+0.407338843 container attach 86a7513365da0cfdf8dd2a1a4f5983dda5ac2caa53425ac4a81274632516a33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:05:35 np0005596060 podman[252327]: 2026-01-26 18:05:35.905924349 +0000 UTC m=+0.407948588 container died 86a7513365da0cfdf8dd2a1a4f5983dda5ac2caa53425ac4a81274632516a33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:05:36 np0005596060 systemd[1]: var-lib-containers-storage-overlay-bf8fa4d471bdded369f3fb3e63c7c7661bba03175add9333d40a64d896b8d7a8-merged.mount: Deactivated successfully.
Jan 26 13:05:37 np0005596060 podman[252327]: 2026-01-26 18:05:37.327339512 +0000 UTC m=+1.829363761 container remove 86a7513365da0cfdf8dd2a1a4f5983dda5ac2caa53425ac4a81274632516a33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_panini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:05:37 np0005596060 systemd[1]: libpod-conmon-86a7513365da0cfdf8dd2a1a4f5983dda5ac2caa53425ac4a81274632516a33d.scope: Deactivated successfully.
Jan 26 13:05:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:37.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:37 np0005596060 podman[252369]: 2026-01-26 18:05:37.57444193 +0000 UTC m=+0.060859645 container create 93a7993c3bacef5dd2d211c73969759dc8c943b774bef91686c86def1017fad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:05:37 np0005596060 systemd[1]: Started libpod-conmon-93a7993c3bacef5dd2d211c73969759dc8c943b774bef91686c86def1017fad9.scope.
Jan 26 13:05:37 np0005596060 podman[252369]: 2026-01-26 18:05:37.536786617 +0000 UTC m=+0.023204352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:05:37 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:05:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271f906a1bc0d4991617f0bcc5377b3fd61749ecc6a524b4d7cf2ddc0b28fcfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271f906a1bc0d4991617f0bcc5377b3fd61749ecc6a524b4d7cf2ddc0b28fcfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271f906a1bc0d4991617f0bcc5377b3fd61749ecc6a524b4d7cf2ddc0b28fcfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/271f906a1bc0d4991617f0bcc5377b3fd61749ecc6a524b4d7cf2ddc0b28fcfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:05:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:37.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 3.6 MiB/s wr, 32 op/s
Jan 26 13:05:37 np0005596060 podman[252369]: 2026-01-26 18:05:37.915315227 +0000 UTC m=+0.401733312 container init 93a7993c3bacef5dd2d211c73969759dc8c943b774bef91686c86def1017fad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 13:05:37 np0005596060 podman[252369]: 2026-01-26 18:05:37.924589639 +0000 UTC m=+0.411007354 container start 93a7993c3bacef5dd2d211c73969759dc8c943b774bef91686c86def1017fad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:05:37 np0005596060 podman[252369]: 2026-01-26 18:05:37.98251584 +0000 UTC m=+0.468933565 container attach 93a7993c3bacef5dd2d211c73969759dc8c943b774bef91686c86def1017fad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:05:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:05:38 np0005596060 wizardly_herschel[252385]: {
Jan 26 13:05:38 np0005596060 wizardly_herschel[252385]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:05:38 np0005596060 wizardly_herschel[252385]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:05:38 np0005596060 wizardly_herschel[252385]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:05:38 np0005596060 wizardly_herschel[252385]:        "osd_id": 1,
Jan 26 13:05:38 np0005596060 wizardly_herschel[252385]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:05:38 np0005596060 wizardly_herschel[252385]:        "type": "bluestore"
Jan 26 13:05:38 np0005596060 wizardly_herschel[252385]:    }
Jan 26 13:05:38 np0005596060 wizardly_herschel[252385]: }
Jan 26 13:05:38 np0005596060 systemd[1]: libpod-93a7993c3bacef5dd2d211c73969759dc8c943b774bef91686c86def1017fad9.scope: Deactivated successfully.
Jan 26 13:05:38 np0005596060 podman[252407]: 2026-01-26 18:05:38.868071273 +0000 UTC m=+0.029321065 container died 93a7993c3bacef5dd2d211c73969759dc8c943b774bef91686c86def1017fad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:05:38 np0005596060 systemd[1]: var-lib-containers-storage-overlay-271f906a1bc0d4991617f0bcc5377b3fd61749ecc6a524b4d7cf2ddc0b28fcfd-merged.mount: Deactivated successfully.
Jan 26 13:05:38 np0005596060 podman[252407]: 2026-01-26 18:05:38.949511593 +0000 UTC m=+0.110761335 container remove 93a7993c3bacef5dd2d211c73969759dc8c943b774bef91686c86def1017fad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 13:05:38 np0005596060 systemd[1]: libpod-conmon-93a7993c3bacef5dd2d211c73969759dc8c943b774bef91686c86def1017fad9.scope: Deactivated successfully.
Jan 26 13:05:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:05:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:05:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:05:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:05:39 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev cb50772d-31b7-45c6-8edf-cfd7df1e7f0c does not exist
Jan 26 13:05:39 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 01979443-5e27-407e-aa26-068bb82f6aec does not exist
Jan 26 13:05:39 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev df5039bd-7b58-4b7d-bcdc-c967de573060 does not exist
Jan 26 13:05:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:39.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:39.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 2.6 MiB/s wr, 16 op/s
Jan 26 13:05:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:05:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:05:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:05:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1168467091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:05:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:05:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1168467091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:05:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:41.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:41.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 5.7 KiB/s rd, 2.1 MiB/s wr, 8 op/s
Jan 26 13:05:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:43.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:05:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:43.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.9 KiB/s rd, 614 B/s wr, 6 op/s
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:05:44
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.mgr', 'vms', '.rgw.root', 'volumes', 'images']
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:05:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:05:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:05:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:45.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:05:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.2 KiB/s rd, 524 B/s wr, 5 op/s
Jan 26 13:05:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:45.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:47.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 4.1 KiB/s rd, 510 B/s wr, 5 op/s
Jan 26 13:05:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:05:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:47.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:05:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:05:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:49.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:49 np0005596060 nova_compute[247421]: 2026-01-26 18:05:49.647 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:05:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:49.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:05:50.774 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:05:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:05:50.776 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:05:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:51.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:51 np0005596060 nova_compute[247421]: 2026-01-26 18:05:51.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:05:51 np0005596060 nova_compute[247421]: 2026-01-26 18:05:51.659 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:05:51 np0005596060 nova_compute[247421]: 2026-01-26 18:05:51.660 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:05:51 np0005596060 nova_compute[247421]: 2026-01-26 18:05:51.660 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:05:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:51.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.396 247428 DEBUG oslo_concurrency.lockutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "3110b92c-0f4b-4f03-8991-a8106cdbe99d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.397 247428 DEBUG oslo_concurrency.lockutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "3110b92c-0f4b-4f03-8991-a8106cdbe99d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.418 247428 DEBUG nova.compute.manager [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.542 247428 DEBUG oslo_concurrency.lockutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.544 247428 DEBUG oslo_concurrency.lockutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.553 247428 DEBUG nova.virt.hardware [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.554 247428 INFO nova.compute.claims [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.654 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.655 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.655 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.670 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.670 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.671 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.693 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.694 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.695 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:05:52 np0005596060 nova_compute[247421]: 2026-01-26 18:05:52.715 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:05:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:05:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/819311566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.155 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.163 247428 DEBUG nova.compute.provider_tree [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.179 247428 DEBUG nova.scheduler.client.report [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.201 247428 DEBUG oslo_concurrency.lockutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.202 247428 DEBUG nova.compute.manager [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.204 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.489s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.204 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.204 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.205 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.287 247428 DEBUG nova.compute.manager [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.312 247428 INFO nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.335 247428 DEBUG nova.compute.manager [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.420 247428 DEBUG nova.compute.manager [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.422 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.422 247428 INFO nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Creating image(s)#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.447 247428 DEBUG nova.storage.rbd_utils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.474 247428 DEBUG nova.storage.rbd_utils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.502 247428 DEBUG nova.storage.rbd_utils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:05:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:53.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.506 247428 DEBUG oslo_concurrency.lockutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "0e27310cde9db7031eb6052434134c1283ddf216" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.507 247428 DEBUG oslo_concurrency.lockutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:05:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:05:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:05:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046435177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.630 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.808 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.809 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5166MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.809 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.810 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.871 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance 3110b92c-0f4b-4f03-8991-a8106cdbe99d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.871 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.871 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:05:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:53 np0005596060 nova_compute[247421]: 2026-01-26 18:05:53.908 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:05:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:53.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:05:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1711619413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:05:54 np0005596060 nova_compute[247421]: 2026-01-26 18:05:54.309 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:05:54 np0005596060 nova_compute[247421]: 2026-01-26 18:05:54.315 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:05:54 np0005596060 nova_compute[247421]: 2026-01-26 18:05:54.341 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:05:54 np0005596060 nova_compute[247421]: 2026-01-26 18:05:54.368 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 13:05:54 np0005596060 nova_compute[247421]: 2026-01-26 18:05:54.369 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:05:54 np0005596060 nova_compute[247421]: 2026-01-26 18:05:54.578 247428 DEBUG nova.virt.libvirt.imagebackend [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Image locations are: [{'url': 'rbd://d4cd1917-5876-51b6-bc64-65a16199754d/images/57de5960-c1c5-4cfa-af34-8f58cf25f585/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d4cd1917-5876-51b6-bc64-65a16199754d/images/57de5960-c1c5-4cfa-af34-8f58cf25f585/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 26 13:05:55 np0005596060 nova_compute[247421]: 2026-01-26 18:05:55.326 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:05:55 np0005596060 nova_compute[247421]: 2026-01-26 18:05:55.326 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:05:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:55.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:05:55.778 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 13:05:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:05:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:55.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:56 np0005596060 nova_compute[247421]: 2026-01-26 18:05:56.670 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:05:56 np0005596060 nova_compute[247421]: 2026-01-26 18:05:56.753 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216.part --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:05:56 np0005596060 nova_compute[247421]: 2026-01-26 18:05:56.755 247428 DEBUG nova.virt.images [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] 57de5960-c1c5-4cfa-af34-8f58cf25f585 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 26 13:05:56 np0005596060 nova_compute[247421]: 2026-01-26 18:05:56.756 247428 DEBUG nova.privsep.utils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 26 13:05:56 np0005596060 nova_compute[247421]: 2026-01-26 18:05:56.757 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216.part /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:05:56 np0005596060 nova_compute[247421]: 2026-01-26 18:05:56.951 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216.part /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216.converted" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:05:56 np0005596060 nova_compute[247421]: 2026-01-26 18:05:56.958 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:05:57 np0005596060 nova_compute[247421]: 2026-01-26 18:05:57.050 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216.converted --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:05:57 np0005596060 nova_compute[247421]: 2026-01-26 18:05:57.051 247428 DEBUG oslo_concurrency.lockutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:05:57 np0005596060 nova_compute[247421]: 2026-01-26 18:05:57.083 247428 DEBUG nova.storage.rbd_utils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:05:57 np0005596060 nova_compute[247421]: 2026-01-26 18:05:57.088 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:05:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:57.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
Jan 26 13:05:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:05:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:57.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:05:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 26 13:05:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 26 13:05:58 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 26 13:05:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:05:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 26 13:05:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 26 13:05:59 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.489 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:05:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:05:59.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.587 247428 DEBUG nova.storage.rbd_utils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] resizing rbd image 3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.720 247428 DEBUG nova.objects.instance [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lazy-loading 'migration_context' on Instance uuid 3110b92c-0f4b-4f03-8991-a8106cdbe99d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.736 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.736 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Ensure instance console log exists: /var/lib/nova/instances/3110b92c-0f4b-4f03-8991-a8106cdbe99d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.737 247428 DEBUG oslo_concurrency.lockutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.738 247428 DEBUG oslo_concurrency.lockutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.738 247428 DEBUG oslo_concurrency.lockutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.742 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.748 247428 WARNING nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.755 247428 DEBUG nova.virt.libvirt.host [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.756 247428 DEBUG nova.virt.libvirt.host [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.760 247428 DEBUG nova.virt.libvirt.host [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.761 247428 DEBUG nova.virt.libvirt.host [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.764 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.764 247428 DEBUG nova.virt.hardware [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.765 247428 DEBUG nova.virt.hardware [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.766 247428 DEBUG nova.virt.hardware [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.766 247428 DEBUG nova.virt.hardware [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.767 247428 DEBUG nova.virt.hardware [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.767 247428 DEBUG nova.virt.hardware [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.768 247428 DEBUG nova.virt.hardware [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.768 247428 DEBUG nova.virt.hardware [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.769 247428 DEBUG nova.virt.hardware [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.770 247428 DEBUG nova.virt.hardware [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.770 247428 DEBUG nova.virt.hardware [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.777 247428 DEBUG nova.privsep.utils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 26 13:05:59 np0005596060 nova_compute[247421]: 2026-01-26 18:05:59.777 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:05:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 10 op/s
Jan 26 13:05:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:05:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:05:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:05:59.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:06:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3270163478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:06:00 np0005596060 nova_compute[247421]: 2026-01-26 18:06:00.274 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:06:00 np0005596060 nova_compute[247421]: 2026-01-26 18:06:00.302 247428 DEBUG nova.storage.rbd_utils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:06:00 np0005596060 nova_compute[247421]: 2026-01-26 18:06:00.307 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:06:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:06:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/821339984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:06:00 np0005596060 nova_compute[247421]: 2026-01-26 18:06:00.745 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:06:00 np0005596060 nova_compute[247421]: 2026-01-26 18:06:00.748 247428 DEBUG nova.objects.instance [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3110b92c-0f4b-4f03-8991-a8106cdbe99d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 13:06:00 np0005596060 nova_compute[247421]: 2026-01-26 18:06:00.772 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  <uuid>3110b92c-0f4b-4f03-8991-a8106cdbe99d</uuid>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  <name>instance-00000001</name>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <nova:name>tempest-AutoAllocateNetworkTest-server-42629869</nova:name>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:05:59</nova:creationTime>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <nova:user uuid="44d840a696d1433d91d7424baebdfd6b">tempest-AutoAllocateNetworkTest-1369791216-project-member</nova:user>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <nova:project uuid="0edb4019e89c4674848ec75122984916">tempest-AutoAllocateNetworkTest-1369791216</nova:project>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <nova:ports/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <entry name="serial">3110b92c-0f4b-4f03-8991-a8106cdbe99d</entry>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <entry name="uuid">3110b92c-0f4b-4f03-8991-a8106cdbe99d</entry>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk.config">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/3110b92c-0f4b-4f03-8991-a8106cdbe99d/console.log" append="off"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:06:00 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:06:00 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:06:00 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:06:00 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 13:06:00 np0005596060 nova_compute[247421]: 2026-01-26 18:06:00.858 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 13:06:00 np0005596060 nova_compute[247421]: 2026-01-26 18:06:00.859 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 13:06:00 np0005596060 nova_compute[247421]: 2026-01-26 18:06:00.859 247428 INFO nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Using config drive
Jan 26 13:06:00 np0005596060 nova_compute[247421]: 2026-01-26 18:06:00.885 247428 DEBUG nova.storage.rbd_utils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:06:01 np0005596060 nova_compute[247421]: 2026-01-26 18:06:01.457 247428 INFO nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Creating config drive at /var/lib/nova/instances/3110b92c-0f4b-4f03-8991-a8106cdbe99d/disk.config
Jan 26 13:06:01 np0005596060 nova_compute[247421]: 2026-01-26 18:06:01.464 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3110b92c-0f4b-4f03-8991-a8106cdbe99d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0xocyat execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:06:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:01.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:01 np0005596060 nova_compute[247421]: 2026-01-26 18:06:01.600 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3110b92c-0f4b-4f03-8991-a8106cdbe99d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0xocyat" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:06:01 np0005596060 nova_compute[247421]: 2026-01-26 18:06:01.628 247428 DEBUG nova.storage.rbd_utils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:06:01 np0005596060 nova_compute[247421]: 2026-01-26 18:06:01.632 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3110b92c-0f4b-4f03-8991-a8106cdbe99d/disk.config 3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:06:01 np0005596060 nova_compute[247421]: 2026-01-26 18:06:01.813 247428 DEBUG oslo_concurrency.processutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3110b92c-0f4b-4f03-8991-a8106cdbe99d/disk.config 3110b92c-0f4b-4f03-8991-a8106cdbe99d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:06:01 np0005596060 nova_compute[247421]: 2026-01-26 18:06:01.815 247428 INFO nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Deleting local config drive /var/lib/nova/instances/3110b92c-0f4b-4f03-8991-a8106cdbe99d/disk.config because it was imported into RBD.
Jan 26 13:06:01 np0005596060 systemd[1]: Starting libvirt secret daemon...
Jan 26 13:06:01 np0005596060 systemd[1]: Started libvirt secret daemon.
Jan 26 13:06:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 51 MiB data, 197 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 325 KiB/s wr, 28 op/s
Jan 26 13:06:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:01.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:01 np0005596060 systemd-machined[213879]: New machine qemu-1-instance-00000001.
Jan 26 13:06:01 np0005596060 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 26 13:06:02 np0005596060 podman[252924]: 2026-01-26 18:06:02.040028367 +0000 UTC m=+0.065553023 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.405 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450762.403356, 3110b92c-0f4b-4f03-8991-a8106cdbe99d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.405 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] VM Resumed (Lifecycle Event)
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.416 247428 DEBUG nova.compute.manager [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.417 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.420 247428 INFO nova.virt.libvirt.driver [-] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Instance spawned successfully.
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.421 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 13:06:02 np0005596060 podman[252989]: 2026-01-26 18:06:02.433710295 +0000 UTC m=+0.102465647 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.439 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.444 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.447 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.448 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.448 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.449 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.449 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.450 247428 DEBUG nova.virt.libvirt.driver [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.468 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.469 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450762.4158719, 3110b92c-0f4b-4f03-8991-a8106cdbe99d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.469 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] VM Started (Lifecycle Event)
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.488 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.491 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.517 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.533 247428 INFO nova.compute.manager [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Took 9.11 seconds to spawn the instance on the hypervisor.
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.534 247428 DEBUG nova.compute.manager [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.583 247428 INFO nova.compute.manager [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Took 10.08 seconds to build instance.
Jan 26 13:06:02 np0005596060 nova_compute[247421]: 2026-01-26 18:06:02.602 247428 DEBUG oslo_concurrency.lockutils [None req-3f4f8dfc-37b9-4c5a-ba0b-dc3c753e4fb7 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "3110b92c-0f4b-4f03-8991-a8106cdbe99d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:06:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:03.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00011723112088218872 of space, bias 1.0, pg target 0.03516933626465661 quantized to 32 (current 32)
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:06:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:06:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 64 op/s
Jan 26 13:06:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:03.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:05.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.7 MiB/s wr, 54 op/s
Jan 26 13:06:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:05.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:07.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 122 op/s
Jan 26 13:06:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:07.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.101 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.102 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.128 247428 DEBUG nova.compute.manager [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.189 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.189 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.194 247428 DEBUG nova.virt.hardware [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.195 247428 INFO nova.compute.claims [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.317 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:06:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:06:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 26 13:06:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 26 13:06:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 26 13:06:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:06:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/110506731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.815 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.821 247428 DEBUG nova.compute.provider_tree [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.873 247428 ERROR nova.scheduler.client.report [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [req-09f52d0c-1ecd-47b1-b112-6acab70ac426] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID c679f5ea-e093-4909-bb04-0342c8551a8f.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-09f52d0c-1ecd-47b1-b112-6acab70ac426"}]}#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.897 247428 DEBUG nova.scheduler.client.report [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Refreshing inventories for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.917 247428 DEBUG nova.scheduler.client.report [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Updating ProviderTree inventory for provider c679f5ea-e093-4909-bb04-0342c8551a8f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.917 247428 DEBUG nova.compute.provider_tree [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.930 247428 DEBUG nova.scheduler.client.report [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Refreshing aggregate associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 26 13:06:08 np0005596060 nova_compute[247421]: 2026-01-26 18:06:08.978 247428 DEBUG nova.scheduler.client.report [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Refreshing trait associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.030 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:06:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:06:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2819407140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.460 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.465 247428 DEBUG nova.compute.provider_tree [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.515 247428 DEBUG nova.scheduler.client.report [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Updated inventory for provider c679f5ea-e093-4909-bb04-0342c8551a8f with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.516 247428 DEBUG nova.compute.provider_tree [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Updating resource provider c679f5ea-e093-4909-bb04-0342c8551a8f generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.516 247428 DEBUG nova.compute.provider_tree [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 26 13:06:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:09.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.547 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.358s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.548 247428 DEBUG nova.compute.manager [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.597 247428 DEBUG nova.compute.manager [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.597 247428 DEBUG nova.network.neutron [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.615 247428 INFO nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.637 247428 DEBUG nova.compute.manager [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.739 247428 DEBUG nova.compute.manager [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.741 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.742 247428 INFO nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Creating image(s)#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.779 247428 DEBUG nova.storage.rbd_utils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.811 247428 DEBUG nova.storage.rbd_utils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.842 247428 DEBUG nova.storage.rbd_utils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.846 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:06:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 88 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.911 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.911 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "0e27310cde9db7031eb6052434134c1283ddf216" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.912 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.912 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.941 247428 DEBUG nova.storage.rbd_utils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:06:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:09.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:09 np0005596060 nova_compute[247421]: 2026-01-26 18:06:09.944 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:06:10 np0005596060 nova_compute[247421]: 2026-01-26 18:06:10.521 247428 DEBUG nova.network.neutron [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Automatically allocating a network for project 0edb4019e89c4674848ec75122984916. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Jan 26 13:06:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:11.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:11 np0005596060 nova_compute[247421]: 2026-01-26 18:06:11.536 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:06:11 np0005596060 nova_compute[247421]: 2026-01-26 18:06:11.619 247428 DEBUG nova.storage.rbd_utils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] resizing rbd image 4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 26 13:06:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 103 MiB data, 216 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 126 op/s
Jan 26 13:06:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:11.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:12 np0005596060 nova_compute[247421]: 2026-01-26 18:06:12.004 247428 DEBUG nova.objects.instance [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lazy-loading 'migration_context' on Instance uuid 4efe084b-d35c-4dbf-b539-1e82b9baf9f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 13:06:12 np0005596060 nova_compute[247421]: 2026-01-26 18:06:12.037 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 13:06:12 np0005596060 nova_compute[247421]: 2026-01-26 18:06:12.038 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Ensure instance console log exists: /var/lib/nova/instances/4efe084b-d35c-4dbf-b539-1e82b9baf9f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 13:06:12 np0005596060 nova_compute[247421]: 2026-01-26 18:06:12.039 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:06:12 np0005596060 nova_compute[247421]: 2026-01-26 18:06:12.040 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:06:12 np0005596060 nova_compute[247421]: 2026-01-26 18:06:12.040 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:06:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:13.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:06:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 134 MiB data, 234 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Jan 26 13:06:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:13.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:06:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:06:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:06:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:06:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:06:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:06:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:14.738 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:06:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:14.739 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:06:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:14.739 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:06:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:15.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 134 MiB data, 234 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Jan 26 13:06:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:15.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:17.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.8 MiB/s wr, 151 op/s
Jan 26 13:06:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.006000149s ======
Jan 26 13:06:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:17.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.006000149s
Jan 26 13:06:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:06:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:19.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 133 op/s
Jan 26 13:06:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:19.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:21.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 126 op/s
Jan 26 13:06:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:21.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:23.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.623459) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450783623532, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1350, "num_deletes": 250, "total_data_size": 2210204, "memory_usage": 2247352, "flush_reason": "Manual Compaction"}
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450783636536, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1347920, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19482, "largest_seqno": 20831, "table_properties": {"data_size": 1342900, "index_size": 2352, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12765, "raw_average_key_size": 20, "raw_value_size": 1331908, "raw_average_value_size": 2141, "num_data_blocks": 105, "num_entries": 622, "num_filter_entries": 622, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769450660, "oldest_key_time": 1769450660, "file_creation_time": 1769450783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 13116 microseconds, and 6926 cpu microseconds.
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.636589) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1347920 bytes OK
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.636611) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.637738) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.637753) EVENT_LOG_v1 {"time_micros": 1769450783637748, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.637771) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 2204308, prev total WAL file size 2204308, number of live WAL files 2.
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.638458) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1316KB)], [44(9712KB)]
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450783638500, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 11293287, "oldest_snapshot_seqno": -1}
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4697 keys, 8272691 bytes, temperature: kUnknown
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450783695989, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8272691, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8241172, "index_size": 18673, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11781, "raw_key_size": 117331, "raw_average_key_size": 24, "raw_value_size": 8155942, "raw_average_value_size": 1736, "num_data_blocks": 769, "num_entries": 4697, "num_filter_entries": 4697, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769450783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.696380) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8272691 bytes
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.697806) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.1 rd, 143.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.5 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(14.5) write-amplify(6.1) OK, records in: 5160, records dropped: 463 output_compression: NoCompression
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.697834) EVENT_LOG_v1 {"time_micros": 1769450783697822, "job": 22, "event": "compaction_finished", "compaction_time_micros": 57584, "compaction_time_cpu_micros": 18409, "output_level": 6, "num_output_files": 1, "total_output_size": 8272691, "num_input_records": 5160, "num_output_records": 4697, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450783698266, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450783700612, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.638397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.700700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.700707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.700708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.700710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:06:23 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:06:23.700712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:06:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 782 KiB/s rd, 5.4 MiB/s wr, 109 op/s
Jan 26 13:06:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:23.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:06:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:25.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:06:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 393 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 26 13:06:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:25.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:26 np0005596060 nova_compute[247421]: 2026-01-26 18:06:26.638 247428 DEBUG nova.network.neutron [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Automatically allocated network: {'id': '0233ae30-2e5a-4e12-9142-37047ec40cce', 'name': 'auto_allocated_network', 'tenant_id': '0edb4019e89c4674848ec75122984916', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['67039721-515c-4cde-ae20-7ece9fb11b87', 'f031f499-16a3-416c-9fdf-487a31751487'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-26T18:06:10Z', 'updated_at': '2026-01-26T18:06:20Z', 'revision_number': 4, 'project_id': '0edb4019e89c4674848ec75122984916'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478
Jan 26 13:06:26 np0005596060 nova_compute[247421]: 2026-01-26 18:06:26.652 247428 WARNING oslo_policy.policy [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 26 13:06:26 np0005596060 nova_compute[247421]: 2026-01-26 18:06:26.653 247428 WARNING oslo_policy.policy [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 26 13:06:26 np0005596060 nova_compute[247421]: 2026-01-26 18:06:26.656 247428 DEBUG nova.policy [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '44d840a696d1433d91d7424baebdfd6b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0edb4019e89c4674848ec75122984916', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 13:06:27 np0005596060 nova_compute[247421]: 2026-01-26 18:06:27.537 247428 DEBUG nova.network.neutron [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Successfully created port: 8b22a859-a612-4861-af28-07ae72a5e29c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 13:06:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:27.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 393 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 26 13:06:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:27.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:28 np0005596060 nova_compute[247421]: 2026-01-26 18:06:28.561 247428 DEBUG nova.network.neutron [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Successfully updated port: 8b22a859-a612-4861-af28-07ae72a5e29c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 13:06:28 np0005596060 nova_compute[247421]: 2026-01-26 18:06:28.572 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "refresh_cache-4efe084b-d35c-4dbf-b539-1e82b9baf9f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 13:06:28 np0005596060 nova_compute[247421]: 2026-01-26 18:06:28.572 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquired lock "refresh_cache-4efe084b-d35c-4dbf-b539-1e82b9baf9f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 13:06:28 np0005596060 nova_compute[247421]: 2026-01-26 18:06:28.573 247428 DEBUG nova.network.neutron [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 13:06:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:06:28 np0005596060 nova_compute[247421]: 2026-01-26 18:06:28.853 247428 DEBUG nova.network.neutron [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 13:06:29 np0005596060 nova_compute[247421]: 2026-01-26 18:06:29.101 247428 DEBUG nova.compute.manager [req-a5f4d1d5-347e-48e7-a3af-9329a2eefe59 req-d07120cf-7f64-4290-946f-3721cd5452b1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Received event network-changed-8b22a859-a612-4861-af28-07ae72a5e29c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 13:06:29 np0005596060 nova_compute[247421]: 2026-01-26 18:06:29.102 247428 DEBUG nova.compute.manager [req-a5f4d1d5-347e-48e7-a3af-9329a2eefe59 req-d07120cf-7f64-4290-946f-3721cd5452b1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Refreshing instance network info cache due to event network-changed-8b22a859-a612-4861-af28-07ae72a5e29c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 13:06:29 np0005596060 nova_compute[247421]: 2026-01-26 18:06:29.102 247428 DEBUG oslo_concurrency.lockutils [req-a5f4d1d5-347e-48e7-a3af-9329a2eefe59 req-d07120cf-7f64-4290-946f-3721cd5452b1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-4efe084b-d35c-4dbf-b539-1e82b9baf9f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 13:06:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:29.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s wr, 0 op/s
Jan 26 13:06:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:29.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.860 247428 DEBUG nova.network.neutron [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Updating instance_info_cache with network_info: [{"id": "8b22a859-a612-4861-af28-07ae72a5e29c", "address": "fa:16:3e:2d:74:c7", "network": {"id": "0233ae30-2e5a-4e12-9142-37047ec40cce", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb4019e89c4674848ec75122984916", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b22a859-a6", "ovs_interfaceid": "8b22a859-a612-4861-af28-07ae72a5e29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.889 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Releasing lock "refresh_cache-4efe084b-d35c-4dbf-b539-1e82b9baf9f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.889 247428 DEBUG nova.compute.manager [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Instance network_info: |[{"id": "8b22a859-a612-4861-af28-07ae72a5e29c", "address": "fa:16:3e:2d:74:c7", "network": {"id": "0233ae30-2e5a-4e12-9142-37047ec40cce", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb4019e89c4674848ec75122984916", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b22a859-a6", "ovs_interfaceid": "8b22a859-a612-4861-af28-07ae72a5e29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.889 247428 DEBUG oslo_concurrency.lockutils [req-a5f4d1d5-347e-48e7-a3af-9329a2eefe59 req-d07120cf-7f64-4290-946f-3721cd5452b1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-4efe084b-d35c-4dbf-b539-1e82b9baf9f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.890 247428 DEBUG nova.network.neutron [req-a5f4d1d5-347e-48e7-a3af-9329a2eefe59 req-d07120cf-7f64-4290-946f-3721cd5452b1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Refreshing network info cache for port 8b22a859-a612-4861-af28-07ae72a5e29c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.893 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Start _get_guest_xml network_info=[{"id": "8b22a859-a612-4861-af28-07ae72a5e29c", "address": "fa:16:3e:2d:74:c7", "network": {"id": "0233ae30-2e5a-4e12-9142-37047ec40cce", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb4019e89c4674848ec75122984916", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b22a859-a6", "ovs_interfaceid": "8b22a859-a612-4861-af28-07ae72a5e29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.898 247428 WARNING nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.906 247428 DEBUG nova.virt.libvirt.host [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.907 247428 DEBUG nova.virt.libvirt.host [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.915 247428 DEBUG nova.virt.libvirt.host [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.916 247428 DEBUG nova.virt.libvirt.host [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.917 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.917 247428 DEBUG nova.virt.hardware [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.918 247428 DEBUG nova.virt.hardware [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.918 247428 DEBUG nova.virt.hardware [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.918 247428 DEBUG nova.virt.hardware [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.918 247428 DEBUG nova.virt.hardware [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.918 247428 DEBUG nova.virt.hardware [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.919 247428 DEBUG nova.virt.hardware [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.919 247428 DEBUG nova.virt.hardware [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.919 247428 DEBUG nova.virt.hardware [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.919 247428 DEBUG nova.virt.hardware [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.919 247428 DEBUG nova.virt.hardware [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 13:06:30 np0005596060 nova_compute[247421]: 2026-01-26 18:06:30.922 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:06:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:06:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1642080015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.351 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.385 247428 DEBUG nova.storage.rbd_utils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.389 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:06:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 26 13:06:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 26 13:06:31 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 26 13:06:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:06:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71239157' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.860 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.863 247428 DEBUG nova.virt.libvirt.vif [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:06:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1013927775-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1013927775-3',id=3,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb4019e89c4674848ec75122984916',ramdisk_id='',reservation_id='r-872mjeag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1369791216',owner_user_name='tempest-AutoAllocateNetworkTest-1369791216-project-member'},t
ags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:06:09Z,user_data=None,user_id='44d840a696d1433d91d7424baebdfd6b',uuid=4efe084b-d35c-4dbf-b539-1e82b9baf9f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b22a859-a612-4861-af28-07ae72a5e29c", "address": "fa:16:3e:2d:74:c7", "network": {"id": "0233ae30-2e5a-4e12-9142-37047ec40cce", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb4019e89c4674848ec75122984916", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b22a859-a6", "ovs_interfaceid": "8b22a859-a612-4861-af28-07ae72a5e29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.863 247428 DEBUG nova.network.os_vif_util [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Converting VIF {"id": "8b22a859-a612-4861-af28-07ae72a5e29c", "address": "fa:16:3e:2d:74:c7", "network": {"id": "0233ae30-2e5a-4e12-9142-37047ec40cce", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb4019e89c4674848ec75122984916", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b22a859-a6", "ovs_interfaceid": "8b22a859-a612-4861-af28-07ae72a5e29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.864 247428 DEBUG nova.network.os_vif_util [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:74:c7,bridge_name='br-int',has_traffic_filtering=True,id=8b22a859-a612-4861-af28-07ae72a5e29c,network=Network(0233ae30-2e5a-4e12-9142-37047ec40cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b22a859-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.867 247428 DEBUG nova.objects.instance [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4efe084b-d35c-4dbf-b539-1e82b9baf9f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.886 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  <uuid>4efe084b-d35c-4dbf-b539-1e82b9baf9f2</uuid>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  <name>instance-00000003</name>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <nova:name>tempest-tempest.common.compute-instance-1013927775-3</nova:name>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:06:30</nova:creationTime>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <nova:user uuid="44d840a696d1433d91d7424baebdfd6b">tempest-AutoAllocateNetworkTest-1369791216-project-member</nova:user>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <nova:project uuid="0edb4019e89c4674848ec75122984916">tempest-AutoAllocateNetworkTest-1369791216</nova:project>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <nova:port uuid="8b22a859-a612-4861-af28-07ae72a5e29c">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.1.0.11" ipVersion="4"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="fdfe:381f:8400::24" ipVersion="6"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <entry name="serial">4efe084b-d35c-4dbf-b539-1e82b9baf9f2</entry>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <entry name="uuid">4efe084b-d35c-4dbf-b539-1e82b9baf9f2</entry>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk.config">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:2d:74:c7"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <target dev="tap8b22a859-a6"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/4efe084b-d35c-4dbf-b539-1e82b9baf9f2/console.log" append="off"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:06:31 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:06:31 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:06:31 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:06:31 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.889 247428 DEBUG nova.compute.manager [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Preparing to wait for external event network-vif-plugged-8b22a859-a612-4861-af28-07ae72a5e29c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.889 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.890 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.890 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.892 247428 DEBUG nova.virt.libvirt.vif [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:06:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1013927775-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1013927775-3',id=3,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0edb4019e89c4674848ec75122984916',ramdisk_id='',reservation_id='r-872mjeag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-1369791216',owner_user_name='tempest-AutoAllocateNetworkTest-1369791216-project-
member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:06:09Z,user_data=None,user_id='44d840a696d1433d91d7424baebdfd6b',uuid=4efe084b-d35c-4dbf-b539-1e82b9baf9f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8b22a859-a612-4861-af28-07ae72a5e29c", "address": "fa:16:3e:2d:74:c7", "network": {"id": "0233ae30-2e5a-4e12-9142-37047ec40cce", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb4019e89c4674848ec75122984916", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b22a859-a6", "ovs_interfaceid": "8b22a859-a612-4861-af28-07ae72a5e29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.892 247428 DEBUG nova.network.os_vif_util [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Converting VIF {"id": "8b22a859-a612-4861-af28-07ae72a5e29c", "address": "fa:16:3e:2d:74:c7", "network": {"id": "0233ae30-2e5a-4e12-9142-37047ec40cce", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb4019e89c4674848ec75122984916", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b22a859-a6", "ovs_interfaceid": "8b22a859-a612-4861-af28-07ae72a5e29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.894 247428 DEBUG nova.network.os_vif_util [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:74:c7,bridge_name='br-int',has_traffic_filtering=True,id=8b22a859-a612-4861-af28-07ae72a5e29c,network=Network(0233ae30-2e5a-4e12-9142-37047ec40cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b22a859-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.894 247428 DEBUG os_vif [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:74:c7,bridge_name='br-int',has_traffic_filtering=True,id=8b22a859-a612-4861-af28-07ae72a5e29c,network=Network(0233ae30-2e5a-4e12-9142-37047ec40cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b22a859-a6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:06:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 307 B/s rd, 27 KiB/s wr, 0 op/s
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.953 247428 DEBUG ovsdbapp.backend.ovs_idl [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.954 247428 DEBUG ovsdbapp.backend.ovs_idl [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.954 247428 DEBUG ovsdbapp.backend.ovs_idl [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.955 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.955 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.956 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.956 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.958 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.961 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.973 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.974 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.974 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:06:31 np0005596060 nova_compute[247421]: 2026-01-26 18:06:31.976 247428 INFO oslo.privsep.daemon [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmplklzp_gx/privsep.sock']#033[00m
Jan 26 13:06:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:31.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 26 13:06:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 26 13:06:32 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 26 13:06:32 np0005596060 podman[253407]: 2026-01-26 18:06:32.799279214 +0000 UTC m=+0.047178336 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 26 13:06:32 np0005596060 nova_compute[247421]: 2026-01-26 18:06:32.816 247428 INFO oslo.privsep.daemon [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 26 13:06:32 np0005596060 nova_compute[247421]: 2026-01-26 18:06:32.651 253406 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 26 13:06:32 np0005596060 nova_compute[247421]: 2026-01-26 18:06:32.656 253406 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 26 13:06:32 np0005596060 nova_compute[247421]: 2026-01-26 18:06:32.658 253406 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Jan 26 13:06:32 np0005596060 nova_compute[247421]: 2026-01-26 18:06:32.658 253406 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253406#033[00m
Jan 26 13:06:32 np0005596060 podman[253408]: 2026-01-26 18:06:32.823278207 +0000 UTC m=+0.081297199 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.127 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.128 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b22a859-a6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.129 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8b22a859-a6, col_values=(('external_ids', {'iface-id': '8b22a859-a612-4861-af28-07ae72a5e29c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:74:c7', 'vm-uuid': '4efe084b-d35c-4dbf-b539-1e82b9baf9f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.131 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:33 np0005596060 NetworkManager[48900]: <info>  [1769450793.1319] manager: (tap8b22a859-a6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.135 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.138 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.139 247428 INFO os_vif [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:74:c7,bridge_name='br-int',has_traffic_filtering=True,id=8b22a859-a612-4861-af28-07ae72a5e29c,network=Network(0233ae30-2e5a-4e12-9142-37047ec40cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b22a859-a6')#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.188 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.189 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.189 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] No VIF found with MAC fa:16:3e:2d:74:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.190 247428 INFO nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Using config drive#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.217 247428 DEBUG nova.storage.rbd_utils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.242 247428 DEBUG nova.network.neutron [req-a5f4d1d5-347e-48e7-a3af-9329a2eefe59 req-d07120cf-7f64-4290-946f-3721cd5452b1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Updated VIF entry in instance network info cache for port 8b22a859-a612-4861-af28-07ae72a5e29c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.243 247428 DEBUG nova.network.neutron [req-a5f4d1d5-347e-48e7-a3af-9329a2eefe59 req-d07120cf-7f64-4290-946f-3721cd5452b1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Updating instance_info_cache with network_info: [{"id": "8b22a859-a612-4861-af28-07ae72a5e29c", "address": "fa:16:3e:2d:74:c7", "network": {"id": "0233ae30-2e5a-4e12-9142-37047ec40cce", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb4019e89c4674848ec75122984916", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b22a859-a6", "ovs_interfaceid": "8b22a859-a612-4861-af28-07ae72a5e29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.281 247428 DEBUG oslo_concurrency.lockutils [req-a5f4d1d5-347e-48e7-a3af-9329a2eefe59 req-d07120cf-7f64-4290-946f-3721cd5452b1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-4efe084b-d35c-4dbf-b539-1e82b9baf9f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.483 247428 DEBUG oslo_concurrency.processutils [None req-867ff7fc-d669-4b50-af08-85430b80dbd1 e9de463f12a8431bb9fdc4842a38e4d0 0f932655b8d5434483371e60b4e048a2 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.508 247428 DEBUG oslo_concurrency.processutils [None req-867ff7fc-d669-4b50-af08-85430b80dbd1 e9de463f12a8431bb9fdc4842a38e4d0 0f932655b8d5434483371e60b4e048a2 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:06:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.769 247428 INFO nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Creating config drive at /var/lib/nova/instances/4efe084b-d35c-4dbf-b539-1e82b9baf9f2/disk.config#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.775 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4efe084b-d35c-4dbf-b539-1e82b9baf9f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5c_qsl4h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.902 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4efe084b-d35c-4dbf-b539-1e82b9baf9f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5c_qsl4h" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:06:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 19 KiB/s wr, 13 op/s
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.930 247428 DEBUG nova.storage.rbd_utils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] rbd image 4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:06:33 np0005596060 nova_compute[247421]: 2026-01-26 18:06:33.934 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4efe084b-d35c-4dbf-b539-1e82b9baf9f2/disk.config 4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:06:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:33.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:34 np0005596060 nova_compute[247421]: 2026-01-26 18:06:34.097 247428 DEBUG oslo_concurrency.processutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4efe084b-d35c-4dbf-b539-1e82b9baf9f2/disk.config 4efe084b-d35c-4dbf-b539-1e82b9baf9f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:06:34 np0005596060 nova_compute[247421]: 2026-01-26 18:06:34.098 247428 INFO nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Deleting local config drive /var/lib/nova/instances/4efe084b-d35c-4dbf-b539-1e82b9baf9f2/disk.config because it was imported into RBD.#033[00m
Jan 26 13:06:34 np0005596060 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 26 13:06:34 np0005596060 kernel: tap8b22a859-a6: entered promiscuous mode
Jan 26 13:06:34 np0005596060 NetworkManager[48900]: <info>  [1769450794.1590] manager: (tap8b22a859-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 26 13:06:34 np0005596060 ovn_controller[148842]: 2026-01-26T18:06:34Z|00027|binding|INFO|Claiming lport 8b22a859-a612-4861-af28-07ae72a5e29c for this chassis.
Jan 26 13:06:34 np0005596060 ovn_controller[148842]: 2026-01-26T18:06:34Z|00028|binding|INFO|8b22a859-a612-4861-af28-07ae72a5e29c: Claiming fa:16:3e:2d:74:c7 10.1.0.11 fdfe:381f:8400::24
Jan 26 13:06:34 np0005596060 nova_compute[247421]: 2026-01-26 18:06:34.162 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:34 np0005596060 nova_compute[247421]: 2026-01-26 18:06:34.167 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:34 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:34.179 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:74:c7 10.1.0.11 fdfe:381f:8400::24'], port_security=['fa:16:3e:2d:74:c7 10.1.0.11 fdfe:381f:8400::24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.11/26 fdfe:381f:8400::24/64', 'neutron:device_id': '4efe084b-d35c-4dbf-b539-1e82b9baf9f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0233ae30-2e5a-4e12-9142-37047ec40cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb4019e89c4674848ec75122984916', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b8b977b7-e75f-401b-bfd0-7066aad28c16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e4f9fdf8-90b6-44b5-be73-6e7a7109730a, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=8b22a859-a612-4861-af28-07ae72a5e29c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:06:34 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:34.181 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 8b22a859-a612-4861-af28-07ae72a5e29c in datapath 0233ae30-2e5a-4e12-9142-37047ec40cce bound to our chassis#033[00m
Jan 26 13:06:34 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:34.184 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0233ae30-2e5a-4e12-9142-37047ec40cce#033[00m
Jan 26 13:06:34 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:34.185 159331 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp2k4ky8u3/privsep.sock']#033[00m
Jan 26 13:06:34 np0005596060 systemd-udevd[253532]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:06:34 np0005596060 NetworkManager[48900]: <info>  [1769450794.2306] device (tap8b22a859-a6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:06:34 np0005596060 NetworkManager[48900]: <info>  [1769450794.2314] device (tap8b22a859-a6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:06:34 np0005596060 systemd-machined[213879]: New machine qemu-2-instance-00000003.
Jan 26 13:06:34 np0005596060 nova_compute[247421]: 2026-01-26 18:06:34.266 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:34 np0005596060 systemd[1]: Started Virtual Machine qemu-2-instance-00000003.
Jan 26 13:06:34 np0005596060 ovn_controller[148842]: 2026-01-26T18:06:34Z|00029|binding|INFO|Setting lport 8b22a859-a612-4861-af28-07ae72a5e29c ovn-installed in OVS
Jan 26 13:06:34 np0005596060 ovn_controller[148842]: 2026-01-26T18:06:34Z|00030|binding|INFO|Setting lport 8b22a859-a612-4861-af28-07ae72a5e29c up in Southbound
Jan 26 13:06:34 np0005596060 nova_compute[247421]: 2026-01-26 18:06:34.273 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:34 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:34.882 159331 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 26 13:06:34 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:34.883 159331 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp2k4ky8u3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 26 13:06:34 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:34.745 253549 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 26 13:06:34 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:34.753 253549 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 26 13:06:34 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:34.757 253549 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Jan 26 13:06:34 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:34.758 253549 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253549#033[00m
Jan 26 13:06:34 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:34.887 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[1d834304-c940-4171-aaee-1b96782f8971]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.037 247428 DEBUG nova.compute.manager [req-164b5205-7a05-41cc-8245-d056489866f5 req-cf674c8a-b4db-4200-b03d-a870558ea7e3 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Received event network-vif-plugged-8b22a859-a612-4861-af28-07ae72a5e29c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.038 247428 DEBUG oslo_concurrency.lockutils [req-164b5205-7a05-41cc-8245-d056489866f5 req-cf674c8a-b4db-4200-b03d-a870558ea7e3 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.038 247428 DEBUG oslo_concurrency.lockutils [req-164b5205-7a05-41cc-8245-d056489866f5 req-cf674c8a-b4db-4200-b03d-a870558ea7e3 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.038 247428 DEBUG oslo_concurrency.lockutils [req-164b5205-7a05-41cc-8245-d056489866f5 req-cf674c8a-b4db-4200-b03d-a870558ea7e3 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.039 247428 DEBUG nova.compute.manager [req-164b5205-7a05-41cc-8245-d056489866f5 req-cf674c8a-b4db-4200-b03d-a870558ea7e3 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Processing event network-vif-plugged-8b22a859-a612-4861-af28-07ae72a5e29c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 26 13:06:35 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:35.474 253549 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:35 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:35.475 253549 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:35 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:35.475 253549 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.486 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.542 247428 DEBUG nova.compute.manager [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.543 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450795.5429022, 4efe084b-d35c-4dbf-b539-1e82b9baf9f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.543 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] VM Started (Lifecycle Event)#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.548 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.550 247428 INFO nova.virt.libvirt.driver [-] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Instance spawned successfully.#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.551 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 26 13:06:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:35.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.578 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.580 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.581 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.581 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.581 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.582 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.582 247428 DEBUG nova.virt.libvirt.driver [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.587 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.636 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.637 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450795.544143, 4efe084b-d35c-4dbf-b539-1e82b9baf9f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.637 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] VM Paused (Lifecycle Event)#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.655 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.659 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450795.5475376, 4efe084b-d35c-4dbf-b539-1e82b9baf9f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.659 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.663 247428 INFO nova.compute.manager [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Took 25.92 seconds to spawn the instance on the hypervisor.#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.663 247428 DEBUG nova.compute.manager [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.697 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.700 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.729 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.744 247428 INFO nova.compute.manager [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Took 27.58 seconds to build instance.#033[00m
Jan 26 13:06:35 np0005596060 nova_compute[247421]: 2026-01-26 18:06:35.763 247428 DEBUG oslo_concurrency.lockutils [None req-9373a014-e8cb-476e-b3bc-a0550da424b1 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 213 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 19 KiB/s wr, 13 op/s
Jan 26 13:06:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:35.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.116 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[adff5fd8-7944-4642-b867-4b01774fdd01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.117 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0233ae30-21 in ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.119 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0233ae30-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.119 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[9a431480-bada-41a7-a94c-79abd228f913]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.123 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5e95654f-e8fa-4ab7-9084-5bf60977133b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.155 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[2b233292-14a8-4f98-b103-51bcb446f189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.182 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c98940-9328-45bd-a86d-517a61d8aa6d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.184 159331 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp7j1z9y31/privsep.sock']#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.847 159331 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.847 159331 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp7j1z9y31/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.722 253606 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.726 253606 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.727 253606 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.727 253606 INFO oslo.privsep.daemon [-] privsep daemon running as pid 253606#033[00m
Jan 26 13:06:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:36.849 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[646b2e81-c6b3-4633-b8bb-2fc976774b60]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:37 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:37.341 253606 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:37 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:37.341 253606 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:37 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:37.342 253606 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:37 np0005596060 nova_compute[247421]: 2026-01-26 18:06:37.367 247428 DEBUG nova.compute.manager [req-1a8f31be-da9a-42f2-b868-7d3470c8288c req-2e10f141-8e01-4c00-97bf-69ec035e9f9e 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Received event network-vif-plugged-8b22a859-a612-4861-af28-07ae72a5e29c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:06:37 np0005596060 nova_compute[247421]: 2026-01-26 18:06:37.370 247428 DEBUG oslo_concurrency.lockutils [req-1a8f31be-da9a-42f2-b868-7d3470c8288c req-2e10f141-8e01-4c00-97bf-69ec035e9f9e 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:37 np0005596060 nova_compute[247421]: 2026-01-26 18:06:37.371 247428 DEBUG oslo_concurrency.lockutils [req-1a8f31be-da9a-42f2-b868-7d3470c8288c req-2e10f141-8e01-4c00-97bf-69ec035e9f9e 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:37 np0005596060 nova_compute[247421]: 2026-01-26 18:06:37.372 247428 DEBUG oslo_concurrency.lockutils [req-1a8f31be-da9a-42f2-b868-7d3470c8288c req-2e10f141-8e01-4c00-97bf-69ec035e9f9e 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:37 np0005596060 nova_compute[247421]: 2026-01-26 18:06:37.372 247428 DEBUG nova.compute.manager [req-1a8f31be-da9a-42f2-b868-7d3470c8288c req-2e10f141-8e01-4c00-97bf-69ec035e9f9e 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] No waiting events found dispatching network-vif-plugged-8b22a859-a612-4861-af28-07ae72a5e29c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:06:37 np0005596060 nova_compute[247421]: 2026-01-26 18:06:37.373 247428 WARNING nova.compute.manager [req-1a8f31be-da9a-42f2-b868-7d3470c8288c req-2e10f141-8e01-4c00-97bf-69ec035e9f9e 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Received unexpected event network-vif-plugged-8b22a859-a612-4861-af28-07ae72a5e29c for instance with vm_state active and task_state None.#033[00m
Jan 26 13:06:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:37.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:37 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:37.919 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[c36251cd-de37-4fbc-bb9b-159dbcb196b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 214 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 5.7 MiB/s rd, 38 KiB/s wr, 221 op/s
Jan 26 13:06:37 np0005596060 NetworkManager[48900]: <info>  [1769450797.9418] manager: (tap0233ae30-20): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 26 13:06:37 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:37.940 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[713e9ef9-d05d-4e79-8f5e-8d6ef671c581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:37 np0005596060 systemd-udevd[253619]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:06:37 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:37.984 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4b1851-3c3c-4c55-8de2-e3b0bade008e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:37.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:37 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:37.991 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4b9127-4758-4ec7-8475-eb139f9e63a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:38 np0005596060 NetworkManager[48900]: <info>  [1769450798.0241] device (tap0233ae30-20): carrier: link connected
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.029 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[80fb5287-3db3-4627-bd29-249ec53e6f74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.053 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b0388bcb-f47a-47bd-89bf-119a6d0cf671]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0233ae30-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:f4:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456260, 'reachable_time': 15505, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253637, 'error': None, 'target': 'ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.074 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b24a6da5-046d-415b-b835-7de75a2876c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:f458'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456260, 'tstamp': 456260}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253638, 'error': None, 'target': 'ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.095 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c81a36-0bbe-4206-a373-7ebe6e6f7d83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0233ae30-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:f4:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456260, 'reachable_time': 15505, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253639, 'error': None, 'target': 'ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:38 np0005596060 nova_compute[247421]: 2026-01-26 18:06:38.132 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.140 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e102d085-1b50-407d-a164-8416a2c1ce4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.215 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e0888007-ba6d-45e4-a709-c38c484a3ce1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.217 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0233ae30-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.218 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.218 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0233ae30-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:06:38 np0005596060 nova_compute[247421]: 2026-01-26 18:06:38.220 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:38 np0005596060 NetworkManager[48900]: <info>  [1769450798.2213] manager: (tap0233ae30-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 26 13:06:38 np0005596060 kernel: tap0233ae30-20: entered promiscuous mode
Jan 26 13:06:38 np0005596060 nova_compute[247421]: 2026-01-26 18:06:38.223 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.224 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0233ae30-20, col_values=(('external_ids', {'iface-id': '21642513-87eb-404c-8f9f-3b78ea6c1c25'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:06:38 np0005596060 nova_compute[247421]: 2026-01-26 18:06:38.225 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:38 np0005596060 ovn_controller[148842]: 2026-01-26T18:06:38Z|00031|binding|INFO|Releasing lport 21642513-87eb-404c-8f9f-3b78ea6c1c25 from this chassis (sb_readonly=0)
Jan 26 13:06:38 np0005596060 nova_compute[247421]: 2026-01-26 18:06:38.242 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.244 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0233ae30-2e5a-4e12-9142-37047ec40cce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0233ae30-2e5a-4e12-9142-37047ec40cce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.245 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[9dba11a5-4e92-4fb8-9296-26fbb4b54594]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.247 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-0233ae30-2e5a-4e12-9142-37047ec40cce
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/0233ae30-2e5a-4e12-9142-37047ec40cce.pid.haproxy
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 0233ae30-2e5a-4e12-9142-37047ec40cce
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:06:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:38.249 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce', 'env', 'PROCESS_TAG=haproxy-0233ae30-2e5a-4e12-9142-37047ec40cce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0233ae30-2e5a-4e12-9142-37047ec40cce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:06:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:06:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 26 13:06:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 26 13:06:38 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 26 13:06:38 np0005596060 podman[253672]: 2026-01-26 18:06:38.690424192 +0000 UTC m=+0.093593532 container create 5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 13:06:38 np0005596060 podman[253672]: 2026-01-26 18:06:38.625742465 +0000 UTC m=+0.028911825 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:06:38 np0005596060 systemd[1]: Started libpod-conmon-5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7.scope.
Jan 26 13:06:38 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:06:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe85f9f521d7b5c402fbfdee285b971190ce0982b6b8b25c47e1b74852f04e1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:38 np0005596060 podman[253672]: 2026-01-26 18:06:38.946154446 +0000 UTC m=+0.349323856 container init 5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 13:06:38 np0005596060 podman[253672]: 2026-01-26 18:06:38.956563934 +0000 UTC m=+0.359733314 container start 5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 13:06:38 np0005596060 neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce[253687]: [NOTICE]   (253691) : New worker (253693) forked
Jan 26 13:06:38 np0005596060 neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce[253687]: [NOTICE]   (253691) : Loading success.
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.241 247428 DEBUG oslo_concurrency.lockutils [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.242 247428 DEBUG oslo_concurrency.lockutils [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.242 247428 DEBUG oslo_concurrency.lockutils [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.242 247428 DEBUG oslo_concurrency.lockutils [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.243 247428 DEBUG oslo_concurrency.lockutils [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.244 247428 INFO nova.compute.manager [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Terminating instance#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.245 247428 DEBUG nova.compute.manager [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:06:39 np0005596060 kernel: tap8b22a859-a6 (unregistering): left promiscuous mode
Jan 26 13:06:39 np0005596060 NetworkManager[48900]: <info>  [1769450799.2948] device (tap8b22a859-a6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:06:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:06:39Z|00032|binding|INFO|Releasing lport 8b22a859-a612-4861-af28-07ae72a5e29c from this chassis (sb_readonly=0)
Jan 26 13:06:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:06:39Z|00033|binding|INFO|Setting lport 8b22a859-a612-4861-af28-07ae72a5e29c down in Southbound
Jan 26 13:06:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:06:39Z|00034|binding|INFO|Removing iface tap8b22a859-a6 ovn-installed in OVS
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.345 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.351 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:74:c7 10.1.0.11 fdfe:381f:8400::24'], port_security=['fa:16:3e:2d:74:c7 10.1.0.11 fdfe:381f:8400::24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.11/26 fdfe:381f:8400::24/64', 'neutron:device_id': '4efe084b-d35c-4dbf-b539-1e82b9baf9f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0233ae30-2e5a-4e12-9142-37047ec40cce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0edb4019e89c4674848ec75122984916', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b8b977b7-e75f-401b-bfd0-7066aad28c16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e4f9fdf8-90b6-44b5-be73-6e7a7109730a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=8b22a859-a612-4861-af28-07ae72a5e29c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.352 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 8b22a859-a612-4861-af28-07ae72a5e29c in datapath 0233ae30-2e5a-4e12-9142-37047ec40cce unbound from our chassis#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.353 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0233ae30-2e5a-4e12-9142-37047ec40cce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.354 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd5da28-18cb-4a47-ba24-e8c7712e6ba5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.355 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce namespace which is not needed anymore#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.361 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:39 np0005596060 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 26 13:06:39 np0005596060 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000003.scope: Consumed 4.992s CPU time.
Jan 26 13:06:39 np0005596060 systemd-machined[213879]: Machine qemu-2-instance-00000003 terminated.
Jan 26 13:06:39 np0005596060 NetworkManager[48900]: <info>  [1769450799.4698] manager: (tap8b22a859-a6): new Tun device (/org/freedesktop/NetworkManager/Devices/27)
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.470 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.476 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.490 247428 INFO nova.virt.libvirt.driver [-] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Instance destroyed successfully.#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.491 247428 DEBUG nova.objects.instance [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lazy-loading 'resources' on Instance uuid 4efe084b-d35c-4dbf-b539-1e82b9baf9f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:06:39 np0005596060 neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce[253687]: [NOTICE]   (253691) : haproxy version is 2.8.14-c23fe91
Jan 26 13:06:39 np0005596060 neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce[253687]: [NOTICE]   (253691) : path to executable is /usr/sbin/haproxy
Jan 26 13:06:39 np0005596060 neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce[253687]: [WARNING]  (253691) : Exiting Master process...
Jan 26 13:06:39 np0005596060 neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce[253687]: [WARNING]  (253691) : Exiting Master process...
Jan 26 13:06:39 np0005596060 neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce[253687]: [ALERT]    (253691) : Current worker (253693) exited with code 143 (Terminated)
Jan 26 13:06:39 np0005596060 neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce[253687]: [WARNING]  (253691) : All workers exited. Exiting... (0)
Jan 26 13:06:39 np0005596060 systemd[1]: libpod-5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7.scope: Deactivated successfully.
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.508 247428 DEBUG nova.virt.libvirt.vif [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:06:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-1013927775-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1013927775-3',id=3,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-01-26T18:06:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0edb4019e89c4674848ec75122984916',ramdisk_id='',reservation_id='r-872mjeag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner
_project_name='tempest-AutoAllocateNetworkTest-1369791216',owner_user_name='tempest-AutoAllocateNetworkTest-1369791216-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:06:35Z,user_data=None,user_id='44d840a696d1433d91d7424baebdfd6b',uuid=4efe084b-d35c-4dbf-b539-1e82b9baf9f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8b22a859-a612-4861-af28-07ae72a5e29c", "address": "fa:16:3e:2d:74:c7", "network": {"id": "0233ae30-2e5a-4e12-9142-37047ec40cce", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb4019e89c4674848ec75122984916", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b22a859-a6", "ovs_interfaceid": "8b22a859-a612-4861-af28-07ae72a5e29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.509 247428 DEBUG nova.network.os_vif_util [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Converting VIF {"id": "8b22a859-a612-4861-af28-07ae72a5e29c", "address": "fa:16:3e:2d:74:c7", "network": {"id": "0233ae30-2e5a-4e12-9142-37047ec40cce", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::24", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0edb4019e89c4674848ec75122984916", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8b22a859-a6", "ovs_interfaceid": "8b22a859-a612-4861-af28-07ae72a5e29c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:06:39 np0005596060 podman[253724]: 2026-01-26 18:06:39.509666491 +0000 UTC m=+0.052764414 container died 5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.509 247428 DEBUG nova.network.os_vif_util [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:74:c7,bridge_name='br-int',has_traffic_filtering=True,id=8b22a859-a612-4861-af28-07ae72a5e29c,network=Network(0233ae30-2e5a-4e12-9142-37047ec40cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b22a859-a6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.510 247428 DEBUG os_vif [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:74:c7,bridge_name='br-int',has_traffic_filtering=True,id=8b22a859-a612-4861-af28-07ae72a5e29c,network=Network(0233ae30-2e5a-4e12-9142-37047ec40cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b22a859-a6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.512 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.513 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b22a859-a6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.514 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.518 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.521 247428 INFO os_vif [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:74:c7,bridge_name='br-int',has_traffic_filtering=True,id=8b22a859-a612-4861-af28-07ae72a5e29c,network=Network(0233ae30-2e5a-4e12-9142-37047ec40cce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8b22a859-a6')#033[00m
Jan 26 13:06:39 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7-userdata-shm.mount: Deactivated successfully.
Jan 26 13:06:39 np0005596060 systemd[1]: var-lib-containers-storage-overlay-dfe85f9f521d7b5c402fbfdee285b971190ce0982b6b8b25c47e1b74852f04e1-merged.mount: Deactivated successfully.
Jan 26 13:06:39 np0005596060 podman[253724]: 2026-01-26 18:06:39.553207516 +0000 UTC m=+0.096305429 container cleanup 5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:06:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:06:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:39.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:06:39 np0005596060 systemd[1]: libpod-conmon-5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7.scope: Deactivated successfully.
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.586 247428 DEBUG nova.compute.manager [req-20f1b98c-98cf-4b05-b17c-f14fad307742 req-8adfb278-b35e-4a22-96ed-0ff6dc1cd346 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Received event network-vif-unplugged-8b22a859-a612-4861-af28-07ae72a5e29c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.587 247428 DEBUG oslo_concurrency.lockutils [req-20f1b98c-98cf-4b05-b17c-f14fad307742 req-8adfb278-b35e-4a22-96ed-0ff6dc1cd346 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.587 247428 DEBUG oslo_concurrency.lockutils [req-20f1b98c-98cf-4b05-b17c-f14fad307742 req-8adfb278-b35e-4a22-96ed-0ff6dc1cd346 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.587 247428 DEBUG oslo_concurrency.lockutils [req-20f1b98c-98cf-4b05-b17c-f14fad307742 req-8adfb278-b35e-4a22-96ed-0ff6dc1cd346 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.587 247428 DEBUG nova.compute.manager [req-20f1b98c-98cf-4b05-b17c-f14fad307742 req-8adfb278-b35e-4a22-96ed-0ff6dc1cd346 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] No waiting events found dispatching network-vif-unplugged-8b22a859-a612-4861-af28-07ae72a5e29c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.588 247428 DEBUG nova.compute.manager [req-20f1b98c-98cf-4b05-b17c-f14fad307742 req-8adfb278-b35e-4a22-96ed-0ff6dc1cd346 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Received event network-vif-unplugged-8b22a859-a612-4861-af28-07ae72a5e29c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:06:39 np0005596060 podman[253779]: 2026-01-26 18:06:39.621751248 +0000 UTC m=+0.044497129 container remove 5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.628 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2bcff7-e1eb-4635-86b9-1330a7b530bc]: (4, ('Mon Jan 26 06:06:39 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce (5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7)\n5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7\nMon Jan 26 06:06:39 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce (5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7)\n5f61e3885e2d4bd5f7f8ea6598fdf18241ea7dfcf97a004482903956b6ed7ff7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.630 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[bd44b826-10dc-481a-9914-6b31f0038bea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.631 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0233ae30-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:06:39 np0005596060 kernel: tap0233ae30-20: left promiscuous mode
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.633 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.636 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.639 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0c244991-19ef-45d6-ad45-054c56178bd7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.648 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.653 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f5cf48ef-af7c-45f7-9c30-90b328562e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.655 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5a6c50-a372-480b-a872-a02cda31b0f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.671 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[038bb3bd-f03c-4335-a8a8-e201ab4ee444]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456249, 'reachable_time': 34897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253797, 'error': None, 'target': 'ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.685 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0233ae30-2e5a-4e12-9142-37047ec40cce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:06:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:39.686 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[1dba7b02-2625-447f-aaac-148df553e9e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:06:39 np0005596060 systemd[1]: run-netns-ovnmeta\x2d0233ae30\x2d2e5a\x2d4e12\x2d9142\x2d37047ec40cce.mount: Deactivated successfully.
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.913 247428 INFO nova.virt.libvirt.driver [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Deleting instance files /var/lib/nova/instances/4efe084b-d35c-4dbf-b539-1e82b9baf9f2_del#033[00m
Jan 26 13:06:39 np0005596060 nova_compute[247421]: 2026-01-26 18:06:39.914 247428 INFO nova.virt.libvirt.driver [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Deletion of /var/lib/nova/instances/4efe084b-d35c-4dbf-b539-1e82b9baf9f2_del complete#033[00m
Jan 26 13:06:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 214 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 5.7 MiB/s rd, 20 KiB/s wr, 221 op/s
Jan 26 13:06:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:39.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:40 np0005596060 nova_compute[247421]: 2026-01-26 18:06:40.013 247428 DEBUG nova.virt.libvirt.host [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Jan 26 13:06:40 np0005596060 nova_compute[247421]: 2026-01-26 18:06:40.013 247428 INFO nova.virt.libvirt.host [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] UEFI support detected#033[00m
Jan 26 13:06:40 np0005596060 nova_compute[247421]: 2026-01-26 18:06:40.015 247428 INFO nova.compute.manager [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:06:40 np0005596060 nova_compute[247421]: 2026-01-26 18:06:40.015 247428 DEBUG oslo.service.loopingcall [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:06:40 np0005596060 nova_compute[247421]: 2026-01-26 18:06:40.015 247428 DEBUG nova.compute.manager [-] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:06:40 np0005596060 nova_compute[247421]: 2026-01-26 18:06:40.015 247428 DEBUG nova.network.neutron [-] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/188746220' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/188746220' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:06:40 np0005596060 nova_compute[247421]: 2026-01-26 18:06:40.521 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:06:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 0eeda403-ccc8-4cd6-aaf8-26ebe1c3172d does not exist
Jan 26 13:06:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 94086962-349b-4a91-ba9d-6f35060c5c45 does not exist
Jan 26 13:06:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ceec2818-14d2-4f71-b7bb-4ff726e4b77e does not exist
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:06:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.281 247428 DEBUG nova.network.neutron [-] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.304 247428 INFO nova.compute.manager [-] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Took 1.29 seconds to deallocate network for instance.#033[00m
Jan 26 13:06:41 np0005596060 podman[254071]: 2026-01-26 18:06:41.305670859 +0000 UTC m=+0.088780484 container create 0084e33ae549f00a6013b1bde3b7fa60f70fcb4ccb91dd840b244dff7ff08e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 26 13:06:41 np0005596060 podman[254071]: 2026-01-26 18:06:41.249766348 +0000 UTC m=+0.032875993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.351 247428 DEBUG oslo_concurrency.lockutils [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.352 247428 DEBUG oslo_concurrency.lockutils [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.422 247428 DEBUG oslo_concurrency.processutils [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:06:41 np0005596060 systemd[1]: Started libpod-conmon-0084e33ae549f00a6013b1bde3b7fa60f70fcb4ccb91dd840b244dff7ff08e11.scope.
Jan 26 13:06:41 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.482 247428 DEBUG nova.compute.manager [req-4c9a6a6f-d265-4ef3-99de-6fd49933c84f req-db22ebfc-56a0-4fcf-887b-db10ec581e9f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Received event network-vif-deleted-8b22a859-a612-4861-af28-07ae72a5e29c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:06:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:41.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:41 np0005596060 podman[254071]: 2026-01-26 18:06:41.579728956 +0000 UTC m=+0.362838581 container init 0084e33ae549f00a6013b1bde3b7fa60f70fcb4ccb91dd840b244dff7ff08e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bhaskara, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 13:06:41 np0005596060 podman[254071]: 2026-01-26 18:06:41.594311626 +0000 UTC m=+0.377421251 container start 0084e33ae549f00a6013b1bde3b7fa60f70fcb4ccb91dd840b244dff7ff08e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bhaskara, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 13:06:41 np0005596060 great_bhaskara[254086]: 167 167
Jan 26 13:06:41 np0005596060 systemd[1]: libpod-0084e33ae549f00a6013b1bde3b7fa60f70fcb4ccb91dd840b244dff7ff08e11.scope: Deactivated successfully.
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.658 247428 DEBUG nova.compute.manager [req-71d635b6-b2ef-46f6-b14b-fda9430e4fcb req-c8a592c7-c039-4761-b3d7-711e8f00eaf7 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Received event network-vif-plugged-8b22a859-a612-4861-af28-07ae72a5e29c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.659 247428 DEBUG oslo_concurrency.lockutils [req-71d635b6-b2ef-46f6-b14b-fda9430e4fcb req-c8a592c7-c039-4761-b3d7-711e8f00eaf7 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.659 247428 DEBUG oslo_concurrency.lockutils [req-71d635b6-b2ef-46f6-b14b-fda9430e4fcb req-c8a592c7-c039-4761-b3d7-711e8f00eaf7 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.659 247428 DEBUG oslo_concurrency.lockutils [req-71d635b6-b2ef-46f6-b14b-fda9430e4fcb req-c8a592c7-c039-4761-b3d7-711e8f00eaf7 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.660 247428 DEBUG nova.compute.manager [req-71d635b6-b2ef-46f6-b14b-fda9430e4fcb req-c8a592c7-c039-4761-b3d7-711e8f00eaf7 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] No waiting events found dispatching network-vif-plugged-8b22a859-a612-4861-af28-07ae72a5e29c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.660 247428 WARNING nova.compute.manager [req-71d635b6-b2ef-46f6-b14b-fda9430e4fcb req-c8a592c7-c039-4761-b3d7-711e8f00eaf7 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Received unexpected event network-vif-plugged-8b22a859-a612-4861-af28-07ae72a5e29c for instance with vm_state deleted and task_state None.#033[00m
Jan 26 13:06:41 np0005596060 podman[254071]: 2026-01-26 18:06:41.768105487 +0000 UTC m=+0.551215162 container attach 0084e33ae549f00a6013b1bde3b7fa60f70fcb4ccb91dd840b244dff7ff08e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bhaskara, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 13:06:41 np0005596060 podman[254071]: 2026-01-26 18:06:41.768683212 +0000 UTC m=+0.551792887 container died 0084e33ae549f00a6013b1bde3b7fa60f70fcb4ccb91dd840b244dff7ff08e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 26 13:06:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:06:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492036755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.871 247428 DEBUG oslo_concurrency.processutils [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.883 247428 DEBUG nova.compute.provider_tree [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.907 247428 DEBUG nova.scheduler.client.report [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:06:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 183 MiB data, 312 MiB used, 21 GiB / 21 GiB avail; 5.0 MiB/s rd, 19 KiB/s wr, 196 op/s
Jan 26 13:06:41 np0005596060 nova_compute[247421]: 2026-01-26 18:06:41.933 247428 DEBUG oslo_concurrency.lockutils [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:41 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0a9eafd4b219063f33ec8834c53fba850230a5142191e942ce36a0fa9a232aba-merged.mount: Deactivated successfully.
Jan 26 13:06:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:41.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:42 np0005596060 nova_compute[247421]: 2026-01-26 18:06:42.003 247428 INFO nova.scheduler.client.report [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Deleted allocations for instance 4efe084b-d35c-4dbf-b539-1e82b9baf9f2#033[00m
Jan 26 13:06:42 np0005596060 nova_compute[247421]: 2026-01-26 18:06:42.158 247428 DEBUG oslo_concurrency.lockutils [None req-3e1bc496-43b8-4f90-9c33-3801650bf8b3 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "4efe084b-d35c-4dbf-b539-1e82b9baf9f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:42 np0005596060 podman[254071]: 2026-01-26 18:06:42.208161384 +0000 UTC m=+0.991271029 container remove 0084e33ae549f00a6013b1bde3b7fa60f70fcb4ccb91dd840b244dff7ff08e11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 26 13:06:42 np0005596060 systemd[1]: libpod-conmon-0084e33ae549f00a6013b1bde3b7fa60f70fcb4ccb91dd840b244dff7ff08e11.scope: Deactivated successfully.
Jan 26 13:06:42 np0005596060 podman[254137]: 2026-01-26 18:06:42.361786017 +0000 UTC m=+0.027620263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:06:42 np0005596060 podman[254137]: 2026-01-26 18:06:42.468288107 +0000 UTC m=+0.134122333 container create 7898b655fe0b6f70c6aefd090fd3008203d38bb411a6544ff296a37fa6558e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 13:06:42 np0005596060 systemd[1]: Started libpod-conmon-7898b655fe0b6f70c6aefd090fd3008203d38bb411a6544ff296a37fa6558e41.scope.
Jan 26 13:06:42 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:06:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e138fa865cd8be3d4bb4cef75f0efe493a6235088705a78fc572e751347b80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e138fa865cd8be3d4bb4cef75f0efe493a6235088705a78fc572e751347b80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e138fa865cd8be3d4bb4cef75f0efe493a6235088705a78fc572e751347b80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e138fa865cd8be3d4bb4cef75f0efe493a6235088705a78fc572e751347b80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51e138fa865cd8be3d4bb4cef75f0efe493a6235088705a78fc572e751347b80/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:42 np0005596060 podman[254137]: 2026-01-26 18:06:42.862302556 +0000 UTC m=+0.528136782 container init 7898b655fe0b6f70c6aefd090fd3008203d38bb411a6544ff296a37fa6558e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:06:42 np0005596060 podman[254137]: 2026-01-26 18:06:42.87380266 +0000 UTC m=+0.539636876 container start 7898b655fe0b6f70c6aefd090fd3008203d38bb411a6544ff296a37fa6558e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:06:42 np0005596060 podman[254137]: 2026-01-26 18:06:42.876872186 +0000 UTC m=+0.542706412 container attach 7898b655fe0b6f70c6aefd090fd3008203d38bb411a6544ff296a37fa6558e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 26 13:06:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:43.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:06:43 np0005596060 epic_snyder[254154]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:06:43 np0005596060 epic_snyder[254154]: --> relative data size: 1.0
Jan 26 13:06:43 np0005596060 epic_snyder[254154]: --> All data devices are unavailable
Jan 26 13:06:43 np0005596060 systemd[1]: libpod-7898b655fe0b6f70c6aefd090fd3008203d38bb411a6544ff296a37fa6558e41.scope: Deactivated successfully.
Jan 26 13:06:43 np0005596060 podman[254137]: 2026-01-26 18:06:43.673726592 +0000 UTC m=+1.339560888 container died 7898b655fe0b6f70c6aefd090fd3008203d38bb411a6544ff296a37fa6558e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:06:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 18 KiB/s wr, 229 op/s
Jan 26 13:06:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:43.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:06:44
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'vms', 'images', 'backups', 'default.rgw.meta']
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:06:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:06:44 np0005596060 systemd[1]: var-lib-containers-storage-overlay-51e138fa865cd8be3d4bb4cef75f0efe493a6235088705a78fc572e751347b80-merged.mount: Deactivated successfully.
Jan 26 13:06:44 np0005596060 nova_compute[247421]: 2026-01-26 18:06:44.516 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:44 np0005596060 podman[254137]: 2026-01-26 18:06:44.664768754 +0000 UTC m=+2.330603030 container remove 7898b655fe0b6f70c6aefd090fd3008203d38bb411a6544ff296a37fa6558e41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_snyder, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:06:44 np0005596060 systemd[1]: libpod-conmon-7898b655fe0b6f70c6aefd090fd3008203d38bb411a6544ff296a37fa6558e41.scope: Deactivated successfully.
Jan 26 13:06:45 np0005596060 podman[254322]: 2026-01-26 18:06:45.346041756 +0000 UTC m=+0.046180041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:06:45 np0005596060 nova_compute[247421]: 2026-01-26 18:06:45.523 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:45.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:45 np0005596060 podman[254322]: 2026-01-26 18:06:45.720052441 +0000 UTC m=+0.420190746 container create b926334d24093ec95b7c78844a7f27942cfe809eb32d9a6a23e9bb5ade311f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_gould, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:06:45 np0005596060 systemd[1]: Started libpod-conmon-b926334d24093ec95b7c78844a7f27942cfe809eb32d9a6a23e9bb5ade311f86.scope.
Jan 26 13:06:45 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:06:45 np0005596060 podman[254322]: 2026-01-26 18:06:45.901857101 +0000 UTC m=+0.601995806 container init b926334d24093ec95b7c78844a7f27942cfe809eb32d9a6a23e9bb5ade311f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_gould, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:06:45 np0005596060 podman[254322]: 2026-01-26 18:06:45.91601259 +0000 UTC m=+0.616150855 container start b926334d24093ec95b7c78844a7f27942cfe809eb32d9a6a23e9bb5ade311f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_gould, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:06:45 np0005596060 infallible_gould[254372]: 167 167
Jan 26 13:06:45 np0005596060 systemd[1]: libpod-b926334d24093ec95b7c78844a7f27942cfe809eb32d9a6a23e9bb5ade311f86.scope: Deactivated successfully.
Jan 26 13:06:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 18 KiB/s wr, 229 op/s
Jan 26 13:06:45 np0005596060 podman[254322]: 2026-01-26 18:06:45.985957327 +0000 UTC m=+0.686095622 container attach b926334d24093ec95b7c78844a7f27942cfe809eb32d9a6a23e9bb5ade311f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_gould, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:06:45 np0005596060 podman[254322]: 2026-01-26 18:06:45.986569052 +0000 UTC m=+0.686707357 container died b926334d24093ec95b7c78844a7f27942cfe809eb32d9a6a23e9bb5ade311f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 26 13:06:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:46.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:46 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0481c89eaf1adc2570396ca8aa5edf6102de58a03233385641fe00c8a1b69eaa-merged.mount: Deactivated successfully.
Jan 26 13:06:46 np0005596060 podman[254322]: 2026-01-26 18:06:46.108607026 +0000 UTC m=+0.808745301 container remove b926334d24093ec95b7c78844a7f27942cfe809eb32d9a6a23e9bb5ade311f86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_gould, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 26 13:06:46 np0005596060 systemd[1]: libpod-conmon-b926334d24093ec95b7c78844a7f27942cfe809eb32d9a6a23e9bb5ade311f86.scope: Deactivated successfully.
Jan 26 13:06:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:06:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4059295991' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:06:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:06:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4059295991' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:06:46 np0005596060 podman[254415]: 2026-01-26 18:06:46.279437394 +0000 UTC m=+0.042410628 container create f02d367313a3f44cc99c78a1b3a55c431fca7b40557d588735db9f4696a37c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 13:06:46 np0005596060 systemd[1]: Started libpod-conmon-f02d367313a3f44cc99c78a1b3a55c431fca7b40557d588735db9f4696a37c7a.scope.
Jan 26 13:06:46 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:06:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631f898235ed531f05728b7c549bbc7e4e8b5850898c21e4aca9468eda38f614/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631f898235ed531f05728b7c549bbc7e4e8b5850898c21e4aca9468eda38f614/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631f898235ed531f05728b7c549bbc7e4e8b5850898c21e4aca9468eda38f614/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/631f898235ed531f05728b7c549bbc7e4e8b5850898c21e4aca9468eda38f614/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:46 np0005596060 podman[254415]: 2026-01-26 18:06:46.260527487 +0000 UTC m=+0.023500741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:06:46 np0005596060 podman[254415]: 2026-01-26 18:06:46.371959459 +0000 UTC m=+0.134932723 container init f02d367313a3f44cc99c78a1b3a55c431fca7b40557d588735db9f4696a37c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:06:46 np0005596060 podman[254415]: 2026-01-26 18:06:46.386016666 +0000 UTC m=+0.148989900 container start f02d367313a3f44cc99c78a1b3a55c431fca7b40557d588735db9f4696a37c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:06:46 np0005596060 podman[254415]: 2026-01-26 18:06:46.389627205 +0000 UTC m=+0.152600489 container attach f02d367313a3f44cc99c78a1b3a55c431fca7b40557d588735db9f4696a37c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:06:47 np0005596060 musing_wilson[254431]: {
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:    "1": [
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:        {
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            "devices": [
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "/dev/loop3"
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            ],
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            "lv_name": "ceph_lv0",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            "lv_size": "7511998464",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            "name": "ceph_lv0",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            "tags": {
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "ceph.cluster_name": "ceph",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "ceph.crush_device_class": "",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "ceph.encrypted": "0",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "ceph.osd_id": "1",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "ceph.type": "block",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:                "ceph.vdo": "0"
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            },
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            "type": "block",
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:            "vg_name": "ceph_vg0"
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:        }
Jan 26 13:06:47 np0005596060 musing_wilson[254431]:    ]
Jan 26 13:06:47 np0005596060 musing_wilson[254431]: }
Jan 26 13:06:47 np0005596060 systemd[1]: libpod-f02d367313a3f44cc99c78a1b3a55c431fca7b40557d588735db9f4696a37c7a.scope: Deactivated successfully.
Jan 26 13:06:47 np0005596060 podman[254415]: 2026-01-26 18:06:47.160570591 +0000 UTC m=+0.923543845 container died f02d367313a3f44cc99c78a1b3a55c431fca7b40557d588735db9f4696a37c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:06:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay-631f898235ed531f05728b7c549bbc7e4e8b5850898c21e4aca9468eda38f614-merged.mount: Deactivated successfully.
Jan 26 13:06:47 np0005596060 podman[254415]: 2026-01-26 18:06:47.230792535 +0000 UTC m=+0.993765789 container remove f02d367313a3f44cc99c78a1b3a55c431fca7b40557d588735db9f4696a37c7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_wilson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 13:06:47 np0005596060 systemd[1]: libpod-conmon-f02d367313a3f44cc99c78a1b3a55c431fca7b40557d588735db9f4696a37c7a.scope: Deactivated successfully.
Jan 26 13:06:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:47.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:47 np0005596060 podman[254595]: 2026-01-26 18:06:47.873474274 +0000 UTC m=+0.039235690 container create 5af1f8662bf05767d72c8a8076f3e6738bb1e6bd5c0107681e3d102d8d629183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 13:06:47 np0005596060 systemd[1]: Started libpod-conmon-5af1f8662bf05767d72c8a8076f3e6738bb1e6bd5c0107681e3d102d8d629183.scope.
Jan 26 13:06:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 21 GiB / 21 GiB avail; 72 KiB/s rd, 15 KiB/s wr, 81 op/s
Jan 26 13:06:47 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:06:47 np0005596060 podman[254595]: 2026-01-26 18:06:47.855525321 +0000 UTC m=+0.021286747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:06:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:48.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:48 np0005596060 podman[254595]: 2026-01-26 18:06:48.19160764 +0000 UTC m=+0.357369056 container init 5af1f8662bf05767d72c8a8076f3e6738bb1e6bd5c0107681e3d102d8d629183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 13:06:48 np0005596060 podman[254595]: 2026-01-26 18:06:48.20378707 +0000 UTC m=+0.369548486 container start 5af1f8662bf05767d72c8a8076f3e6738bb1e6bd5c0107681e3d102d8d629183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:06:48 np0005596060 podman[254595]: 2026-01-26 18:06:48.20741807 +0000 UTC m=+0.373179486 container attach 5af1f8662bf05767d72c8a8076f3e6738bb1e6bd5c0107681e3d102d8d629183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dijkstra, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:06:48 np0005596060 nostalgic_dijkstra[254611]: 167 167
Jan 26 13:06:48 np0005596060 systemd[1]: libpod-5af1f8662bf05767d72c8a8076f3e6738bb1e6bd5c0107681e3d102d8d629183.scope: Deactivated successfully.
Jan 26 13:06:48 np0005596060 podman[254595]: 2026-01-26 18:06:48.21227411 +0000 UTC m=+0.378035526 container died 5af1f8662bf05767d72c8a8076f3e6738bb1e6bd5c0107681e3d102d8d629183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 13:06:48 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4d8b35fbd391ca4cde7aa948743954f2d2d3f2b8da7353a3b8d5603021b6be87-merged.mount: Deactivated successfully.
Jan 26 13:06:48 np0005596060 podman[254595]: 2026-01-26 18:06:48.253676222 +0000 UTC m=+0.419437638 container remove 5af1f8662bf05767d72c8a8076f3e6738bb1e6bd5c0107681e3d102d8d629183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 13:06:48 np0005596060 systemd[1]: libpod-conmon-5af1f8662bf05767d72c8a8076f3e6738bb1e6bd5c0107681e3d102d8d629183.scope: Deactivated successfully.
Jan 26 13:06:48 np0005596060 podman[254635]: 2026-01-26 18:06:48.471119172 +0000 UTC m=+0.062565496 container create f52a773ccc6466b66f6ade4918e00e924e0166a443ccad228350a268d9139071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_zhukovsky, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 13:06:48 np0005596060 systemd[1]: Started libpod-conmon-f52a773ccc6466b66f6ade4918e00e924e0166a443ccad228350a268d9139071.scope.
Jan 26 13:06:48 np0005596060 podman[254635]: 2026-01-26 18:06:48.444138495 +0000 UTC m=+0.035584859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:06:48 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:06:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee1bd8a887f896ab5d00c9cdd6bfc7f842ac004c4047211295b6c4123519785/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee1bd8a887f896ab5d00c9cdd6bfc7f842ac004c4047211295b6c4123519785/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee1bd8a887f896ab5d00c9cdd6bfc7f842ac004c4047211295b6c4123519785/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:48 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee1bd8a887f896ab5d00c9cdd6bfc7f842ac004c4047211295b6c4123519785/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:06:48 np0005596060 podman[254635]: 2026-01-26 18:06:48.58037889 +0000 UTC m=+0.171825224 container init f52a773ccc6466b66f6ade4918e00e924e0166a443ccad228350a268d9139071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_zhukovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 13:06:48 np0005596060 podman[254635]: 2026-01-26 18:06:48.589841493 +0000 UTC m=+0.181287807 container start f52a773ccc6466b66f6ade4918e00e924e0166a443ccad228350a268d9139071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:06:48 np0005596060 podman[254635]: 2026-01-26 18:06:48.606420163 +0000 UTC m=+0.197866477 container attach f52a773ccc6466b66f6ade4918e00e924e0166a443ccad228350a268d9139071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:06:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:06:48 np0005596060 nova_compute[247421]: 2026-01-26 18:06:48.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:06:48 np0005596060 nova_compute[247421]: 2026-01-26 18:06:48.655 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 26 13:06:48 np0005596060 nova_compute[247421]: 2026-01-26 18:06:48.680 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 26 13:06:48 np0005596060 nova_compute[247421]: 2026-01-26 18:06:48.680 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:06:48 np0005596060 nova_compute[247421]: 2026-01-26 18:06:48.681 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 26 13:06:48 np0005596060 nova_compute[247421]: 2026-01-26 18:06:48.695 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:06:49 np0005596060 practical_zhukovsky[254651]: {
Jan 26 13:06:49 np0005596060 practical_zhukovsky[254651]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:06:49 np0005596060 practical_zhukovsky[254651]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:06:49 np0005596060 practical_zhukovsky[254651]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:06:49 np0005596060 practical_zhukovsky[254651]:        "osd_id": 1,
Jan 26 13:06:49 np0005596060 practical_zhukovsky[254651]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:06:49 np0005596060 practical_zhukovsky[254651]:        "type": "bluestore"
Jan 26 13:06:49 np0005596060 practical_zhukovsky[254651]:    }
Jan 26 13:06:49 np0005596060 practical_zhukovsky[254651]: }
Jan 26 13:06:49 np0005596060 systemd[1]: libpod-f52a773ccc6466b66f6ade4918e00e924e0166a443ccad228350a268d9139071.scope: Deactivated successfully.
Jan 26 13:06:49 np0005596060 podman[254635]: 2026-01-26 18:06:49.416606788 +0000 UTC m=+1.008053102 container died f52a773ccc6466b66f6ade4918e00e924e0166a443ccad228350a268d9139071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_zhukovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:06:49 np0005596060 nova_compute[247421]: 2026-01-26 18:06:49.519 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:49 np0005596060 nova_compute[247421]: 2026-01-26 18:06:49.537 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:49.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4ee1bd8a887f896ab5d00c9cdd6bfc7f842ac004c4047211295b6c4123519785-merged.mount: Deactivated successfully.
Jan 26 13:06:49 np0005596060 nova_compute[247421]: 2026-01-26 18:06:49.707 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:06:49 np0005596060 podman[254635]: 2026-01-26 18:06:49.746281809 +0000 UTC m=+1.337728123 container remove f52a773ccc6466b66f6ade4918e00e924e0166a443ccad228350a268d9139071 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:06:49 np0005596060 systemd[1]: libpod-conmon-f52a773ccc6466b66f6ade4918e00e924e0166a443ccad228350a268d9139071.scope: Deactivated successfully.
Jan 26 13:06:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:06:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:06:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:06:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:06:49 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 0cccc1b0-ad08-43f8-b6d3-e528f475ed2a does not exist
Jan 26 13:06:49 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 76568d86-45ff-4c7b-a4f8-2dadd89169ed does not exist
Jan 26 13:06:49 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev fda69c32-797e-4267-a7df-7e9631c93266 does not exist
Jan 26 13:06:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 121 MiB data, 280 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 13 KiB/s wr, 72 op/s
Jan 26 13:06:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:50.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:50 np0005596060 nova_compute[247421]: 2026-01-26 18:06:50.525 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:51 np0005596060 nova_compute[247421]: 2026-01-26 18:06:51.083 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:51.085 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:06:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:51.086 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:06:51 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:06:51 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:06:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:06:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:51.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:06:51 np0005596060 nova_compute[247421]: 2026-01-26 18:06:51.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:06:51 np0005596060 nova_compute[247421]: 2026-01-26 18:06:51.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:06:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 140 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 60 KiB/s rd, 439 KiB/s wr, 69 op/s
Jan 26 13:06:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:52.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.033 247428 DEBUG oslo_concurrency.lockutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "3110b92c-0f4b-4f03-8991-a8106cdbe99d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.034 247428 DEBUG oslo_concurrency.lockutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "3110b92c-0f4b-4f03-8991-a8106cdbe99d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.034 247428 DEBUG oslo_concurrency.lockutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "3110b92c-0f4b-4f03-8991-a8106cdbe99d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.035 247428 DEBUG oslo_concurrency.lockutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "3110b92c-0f4b-4f03-8991-a8106cdbe99d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.035 247428 DEBUG oslo_concurrency.lockutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "3110b92c-0f4b-4f03-8991-a8106cdbe99d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.037 247428 INFO nova.compute.manager [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Terminating instance#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.038 247428 DEBUG oslo_concurrency.lockutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "refresh_cache-3110b92c-0f4b-4f03-8991-a8106cdbe99d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.038 247428 DEBUG oslo_concurrency.lockutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquired lock "refresh_cache-3110b92c-0f4b-4f03-8991-a8106cdbe99d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.038 247428 DEBUG nova.network.neutron [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.675 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.676 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.676 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.676 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.676 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:06:52 np0005596060 nova_compute[247421]: 2026-01-26 18:06:52.961 247428 DEBUG nova.network.neutron [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 26 13:06:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:06:53 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/372038196' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.126 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.220 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.221 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.413 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.415 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4674MB free_disk=20.937911987304688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.415 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.415 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.439 247428 DEBUG nova.network.neutron [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.468 247428 DEBUG oslo_concurrency.lockutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Releasing lock "refresh_cache-3110b92c-0f4b-4f03-8991-a8106cdbe99d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.468 247428 DEBUG nova.compute.manager [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.490 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance 3110b92c-0f4b-4f03-8991-a8106cdbe99d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.490 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.491 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:06:53 np0005596060 nova_compute[247421]: 2026-01-26 18:06:53.574 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:06:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:53.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:06:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 58 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 26 13:06:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:54.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:06:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1949096769' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:06:54 np0005596060 nova_compute[247421]: 2026-01-26 18:06:54.131 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:06:54 np0005596060 nova_compute[247421]: 2026-01-26 18:06:54.138 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:06:54 np0005596060 nova_compute[247421]: 2026-01-26 18:06:54.167 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:06:54 np0005596060 nova_compute[247421]: 2026-01-26 18:06:54.199 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:06:54 np0005596060 nova_compute[247421]: 2026-01-26 18:06:54.200 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:06:54 np0005596060 nova_compute[247421]: 2026-01-26 18:06:54.488 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769450799.4881089, 4efe084b-d35c-4dbf-b539-1e82b9baf9f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:06:54 np0005596060 nova_compute[247421]: 2026-01-26 18:06:54.489 247428 INFO nova.compute.manager [-] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:06:54 np0005596060 nova_compute[247421]: 2026-01-26 18:06:54.506 247428 DEBUG nova.compute.manager [None req-e49a2cb4-0c49-4b15-b9fc-2018c47bd428 - - - - - -] [instance: 4efe084b-d35c-4dbf-b539-1e82b9baf9f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:06:54 np0005596060 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 26 13:06:54 np0005596060 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 14.705s CPU time.
Jan 26 13:06:54 np0005596060 systemd-machined[213879]: Machine qemu-1-instance-00000001 terminated.
Jan 26 13:06:54 np0005596060 nova_compute[247421]: 2026-01-26 18:06:54.523 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:54 np0005596060 nova_compute[247421]: 2026-01-26 18:06:54.695 247428 INFO nova.virt.libvirt.driver [-] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Instance destroyed successfully.#033[00m
Jan 26 13:06:54 np0005596060 nova_compute[247421]: 2026-01-26 18:06:54.695 247428 DEBUG nova.objects.instance [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lazy-loading 'resources' on Instance uuid 3110b92c-0f4b-4f03-8991-a8106cdbe99d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:06:55 np0005596060 nova_compute[247421]: 2026-01-26 18:06:55.200 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:06:55 np0005596060 nova_compute[247421]: 2026-01-26 18:06:55.201 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:06:55 np0005596060 nova_compute[247421]: 2026-01-26 18:06:55.202 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:06:55 np0005596060 nova_compute[247421]: 2026-01-26 18:06:55.226 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 26 13:06:55 np0005596060 nova_compute[247421]: 2026-01-26 18:06:55.227 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:06:55 np0005596060 nova_compute[247421]: 2026-01-26 18:06:55.228 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:06:55 np0005596060 nova_compute[247421]: 2026-01-26 18:06:55.228 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:06:55 np0005596060 nova_compute[247421]: 2026-01-26 18:06:55.229 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:06:55 np0005596060 nova_compute[247421]: 2026-01-26 18:06:55.229 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:06:55 np0005596060 nova_compute[247421]: 2026-01-26 18:06:55.527 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:06:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:55.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:06:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 26 13:06:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:06:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:56.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:06:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:57.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 21 GiB / 21 GiB avail; 590 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 26 13:06:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:06:58.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:06:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:06:59.089 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:06:59 np0005596060 nova_compute[247421]: 2026-01-26 18:06:59.281 247428 INFO nova.virt.libvirt.driver [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Deleting instance files /var/lib/nova/instances/3110b92c-0f4b-4f03-8991-a8106cdbe99d_del#033[00m
Jan 26 13:06:59 np0005596060 nova_compute[247421]: 2026-01-26 18:06:59.282 247428 INFO nova.virt.libvirt.driver [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Deletion of /var/lib/nova/instances/3110b92c-0f4b-4f03-8991-a8106cdbe99d_del complete#033[00m
Jan 26 13:06:59 np0005596060 nova_compute[247421]: 2026-01-26 18:06:59.332 247428 INFO nova.compute.manager [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Took 5.86 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:06:59 np0005596060 nova_compute[247421]: 2026-01-26 18:06:59.333 247428 DEBUG oslo.service.loopingcall [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:06:59 np0005596060 nova_compute[247421]: 2026-01-26 18:06:59.333 247428 DEBUG nova.compute.manager [-] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:06:59 np0005596060 nova_compute[247421]: 2026-01-26 18:06:59.333 247428 DEBUG nova.network.neutron [-] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:06:59 np0005596060 nova_compute[247421]: 2026-01-26 18:06:59.528 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:06:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:06:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:06:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:06:59.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:06:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 88 MiB data, 265 MiB used, 21 GiB / 21 GiB avail; 580 KiB/s rd, 1.8 MiB/s wr, 73 op/s
Jan 26 13:06:59 np0005596060 nova_compute[247421]: 2026-01-26 18:06:59.947 247428 DEBUG nova.network.neutron [-] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 26 13:06:59 np0005596060 nova_compute[247421]: 2026-01-26 18:06:59.975 247428 DEBUG nova.network.neutron [-] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:07:00 np0005596060 nova_compute[247421]: 2026-01-26 18:07:00.005 247428 INFO nova.compute.manager [-] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Took 0.67 seconds to deallocate network for instance.#033[00m
Jan 26 13:07:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:00.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:00 np0005596060 nova_compute[247421]: 2026-01-26 18:07:00.060 247428 DEBUG oslo_concurrency.lockutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:07:00 np0005596060 nova_compute[247421]: 2026-01-26 18:07:00.061 247428 DEBUG oslo_concurrency.lockutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:07:00 np0005596060 nova_compute[247421]: 2026-01-26 18:07:00.195 247428 DEBUG oslo_concurrency.processutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:07:00 np0005596060 nova_compute[247421]: 2026-01-26 18:07:00.531 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:07:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/581801526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:07:00 np0005596060 nova_compute[247421]: 2026-01-26 18:07:00.724 247428 DEBUG oslo_concurrency.processutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:07:00 np0005596060 nova_compute[247421]: 2026-01-26 18:07:00.732 247428 DEBUG nova.compute.provider_tree [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:07:00 np0005596060 nova_compute[247421]: 2026-01-26 18:07:00.850 247428 DEBUG nova.scheduler.client.report [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:07:00 np0005596060 nova_compute[247421]: 2026-01-26 18:07:00.875 247428 DEBUG oslo_concurrency.lockutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:07:01 np0005596060 nova_compute[247421]: 2026-01-26 18:07:01.020 247428 INFO nova.scheduler.client.report [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Deleted allocations for instance 3110b92c-0f4b-4f03-8991-a8106cdbe99d#033[00m
Jan 26 13:07:01 np0005596060 nova_compute[247421]: 2026-01-26 18:07:01.114 247428 DEBUG oslo_concurrency.lockutils [None req-33bebd99-cdf2-4e30-a75e-e6e73f62a536 44d840a696d1433d91d7424baebdfd6b 0edb4019e89c4674848ec75122984916 - - default default] Lock "3110b92c-0f4b-4f03-8991-a8106cdbe99d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 9.080s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:07:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:01.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 68 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 585 KiB/s rd, 1.8 MiB/s wr, 81 op/s
Jan 26 13:07:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:07:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:02.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:07:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:03.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007628201772752652 of space, bias 1.0, pg target 0.22884605318257956 quantized to 32 (current 32)
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:07:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:07:03 np0005596060 podman[254831]: 2026-01-26 18:07:03.82664997 +0000 UTC m=+0.078497239 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 13:07:03 np0005596060 podman[254832]: 2026-01-26 18:07:03.892134647 +0000 UTC m=+0.146563660 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 13:07:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 21 GiB / 21 GiB avail; 605 KiB/s rd, 1.4 MiB/s wr, 108 op/s
Jan 26 13:07:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:04.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:04 np0005596060 nova_compute[247421]: 2026-01-26 18:07:04.530 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:05 np0005596060 nova_compute[247421]: 2026-01-26 18:07:05.532 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:05.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 41 MiB data, 239 MiB used, 21 GiB / 21 GiB avail; 589 KiB/s rd, 15 KiB/s wr, 82 op/s
Jan 26 13:07:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:06.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:07.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 21 GiB / 21 GiB avail; 589 KiB/s rd, 15 KiB/s wr, 82 op/s
Jan 26 13:07:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:08.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:07:09 np0005596060 nova_compute[247421]: 2026-01-26 18:07:09.535 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:09.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:09 np0005596060 nova_compute[247421]: 2026-01-26 18:07:09.693 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769450814.6915724, 3110b92c-0f4b-4f03-8991-a8106cdbe99d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:07:09 np0005596060 nova_compute[247421]: 2026-01-26 18:07:09.694 247428 INFO nova.compute.manager [-] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:07:09 np0005596060 nova_compute[247421]: 2026-01-26 18:07:09.715 247428 DEBUG nova.compute.manager [None req-46c9f504-673d-4e74-8206-258536847091 - - - - - -] [instance: 3110b92c-0f4b-4f03-8991-a8106cdbe99d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:07:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Jan 26 13:07:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:10.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:10 np0005596060 nova_compute[247421]: 2026-01-26 18:07:10.535 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:11.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 1.7 KiB/s wr, 36 op/s
Jan 26 13:07:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:12.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:13.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:07:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 682 B/s wr, 28 op/s
Jan 26 13:07:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:14.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:07:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:07:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:07:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:07:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:07:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:07:14 np0005596060 nova_compute[247421]: 2026-01-26 18:07:14.539 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:14.739 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:07:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:14.740 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:07:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:14.740 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:07:15 np0005596060 nova_compute[247421]: 2026-01-26 18:07:15.538 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:15.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:07:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:16.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:17.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:07:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:07:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:18.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:07:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:07:19 np0005596060 nova_compute[247421]: 2026-01-26 18:07:19.585 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:19.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:07:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:20.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:20 np0005596060 nova_compute[247421]: 2026-01-26 18:07:20.540 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:21.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:07:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:22.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:23 np0005596060 ovn_controller[148842]: 2026-01-26T18:07:23Z|00035|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 26 13:07:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:23.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:07:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:07:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:24.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:24 np0005596060 nova_compute[247421]: 2026-01-26 18:07:24.590 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:25 np0005596060 nova_compute[247421]: 2026-01-26 18:07:25.543 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:25.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 41 MiB data, 234 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:07:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:26.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:07:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:27.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:07:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 82 MiB data, 247 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Jan 26 13:07:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:28.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:07:29 np0005596060 nova_compute[247421]: 2026-01-26 18:07:29.593 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:29.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 82 MiB data, 247 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.7 MiB/s wr, 25 op/s
Jan 26 13:07:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:30.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:30 np0005596060 nova_compute[247421]: 2026-01-26 18:07:30.544 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:31.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 88 MiB data, 248 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:07:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:32.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:33.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:07:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:07:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:34.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:34 np0005596060 nova_compute[247421]: 2026-01-26 18:07:34.597 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:34 np0005596060 podman[254994]: 2026-01-26 18:07:34.848585129 +0000 UTC m=+0.105720082 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 13:07:34 np0005596060 podman[254993]: 2026-01-26 18:07:34.853448689 +0000 UTC m=+0.105121677 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 13:07:35 np0005596060 nova_compute[247421]: 2026-01-26 18:07:35.545 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:35.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:07:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:36.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:37.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 69 op/s
Jan 26 13:07:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:38.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:07:39 np0005596060 nova_compute[247421]: 2026-01-26 18:07:39.601 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:39.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 1021 KiB/s rd, 128 KiB/s wr, 43 op/s
Jan 26 13:07:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:40.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.148 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Acquiring lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.149 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.165 247428 DEBUG nova.compute.manager [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.264 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.265 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.279 247428 DEBUG nova.virt.hardware [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.279 247428 INFO nova.compute.claims [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.446 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.548 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:07:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/928330344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.904 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.910 247428 DEBUG nova.compute.provider_tree [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.934 247428 DEBUG nova.scheduler.client.report [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.984 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:07:40 np0005596060 nova_compute[247421]: 2026-01-26 18:07:40.985 247428 DEBUG nova.compute.manager [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.034 247428 DEBUG nova.compute.manager [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.035 247428 DEBUG nova.network.neutron [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.064 247428 INFO nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.109 247428 DEBUG nova.compute.manager [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.356 247428 DEBUG nova.compute.manager [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.358 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.359 247428 INFO nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Creating image(s)#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.394 247428 DEBUG nova.storage.rbd_utils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] rbd image 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.425 247428 DEBUG nova.storage.rbd_utils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] rbd image 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.453 247428 DEBUG nova.storage.rbd_utils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] rbd image 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.457 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.513 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.514 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Acquiring lock "0e27310cde9db7031eb6052434134c1283ddf216" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.515 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.516 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.541 247428 DEBUG nova.storage.rbd_utils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] rbd image 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.544 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:07:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:41.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.713 247428 DEBUG nova.policy [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4824e9871dbc4b4c84dffadc67ceb442', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4a6a7c4658204ff7b58cbc0fec17a157', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.819 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:07:41 np0005596060 nova_compute[247421]: 2026-01-26 18:07:41.897 247428 DEBUG nova.storage.rbd_utils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] resizing rbd image 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 26 13:07:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 128 KiB/s wr, 47 op/s
Jan 26 13:07:42 np0005596060 nova_compute[247421]: 2026-01-26 18:07:42.021 247428 DEBUG nova.objects.instance [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lazy-loading 'migration_context' on Instance uuid 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:07:42 np0005596060 nova_compute[247421]: 2026-01-26 18:07:42.041 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 26 13:07:42 np0005596060 nova_compute[247421]: 2026-01-26 18:07:42.042 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Ensure instance console log exists: /var/lib/nova/instances/8c19a6a9-b54e-4bc8-a58b-a6186c2d048b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 26 13:07:42 np0005596060 nova_compute[247421]: 2026-01-26 18:07:42.042 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:07:42 np0005596060 nova_compute[247421]: 2026-01-26 18:07:42.043 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:07:42 np0005596060 nova_compute[247421]: 2026-01-26 18:07:42.044 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:07:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:42.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:43 np0005596060 nova_compute[247421]: 2026-01-26 18:07:43.549 247428 DEBUG nova.virt.libvirt.driver [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Creating tmpfile /var/lib/nova/instances/tmpdzsz1mp9 to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Jan 26 13:07:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:43.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:07:43 np0005596060 nova_compute[247421]: 2026-01-26 18:07:43.718 247428 DEBUG nova.network.neutron [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Successfully created port: 59c8d05c-d702-4701-8157-aa4f2da6736e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 26 13:07:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 124 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 88 op/s
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:07:44
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'vms']
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:07:44 np0005596060 nova_compute[247421]: 2026-01-26 18:07:44.089 247428 DEBUG nova.compute.manager [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdzsz1mp9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Jan 26 13:07:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:07:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:44.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:07:44 np0005596060 nova_compute[247421]: 2026-01-26 18:07:44.120 247428 DEBUG oslo_concurrency.lockutils [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:07:44 np0005596060 nova_compute[247421]: 2026-01-26 18:07:44.121 247428 DEBUG oslo_concurrency.lockutils [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:07:44 np0005596060 nova_compute[247421]: 2026-01-26 18:07:44.142 247428 INFO nova.compute.rpcapi [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Jan 26 13:07:44 np0005596060 nova_compute[247421]: 2026-01-26 18:07:44.143 247428 DEBUG oslo_concurrency.lockutils [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:07:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:07:44 np0005596060 nova_compute[247421]: 2026-01-26 18:07:44.603 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:45 np0005596060 nova_compute[247421]: 2026-01-26 18:07:45.490 247428 DEBUG nova.network.neutron [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Successfully updated port: 59c8d05c-d702-4701-8157-aa4f2da6736e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 26 13:07:45 np0005596060 nova_compute[247421]: 2026-01-26 18:07:45.541 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Acquiring lock "refresh_cache-8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:07:45 np0005596060 nova_compute[247421]: 2026-01-26 18:07:45.542 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Acquired lock "refresh_cache-8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:07:45 np0005596060 nova_compute[247421]: 2026-01-26 18:07:45.542 247428 DEBUG nova.network.neutron [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:07:45 np0005596060 nova_compute[247421]: 2026-01-26 18:07:45.549 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:45.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:45 np0005596060 nova_compute[247421]: 2026-01-26 18:07:45.681 247428 DEBUG nova.compute.manager [req-872d07c8-bfe0-4bd0-add4-8bb0b7822f1f req-ad59da07-8745-41ad-990d-9117e07b3e7d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Received event network-changed-59c8d05c-d702-4701-8157-aa4f2da6736e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:07:45 np0005596060 nova_compute[247421]: 2026-01-26 18:07:45.682 247428 DEBUG nova.compute.manager [req-872d07c8-bfe0-4bd0-add4-8bb0b7822f1f req-ad59da07-8745-41ad-990d-9117e07b3e7d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Refreshing instance network info cache due to event network-changed-59c8d05c-d702-4701-8157-aa4f2da6736e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:07:45 np0005596060 nova_compute[247421]: 2026-01-26 18:07:45.682 247428 DEBUG oslo_concurrency.lockutils [req-872d07c8-bfe0-4bd0-add4-8bb0b7822f1f req-ad59da07-8745-41ad-990d-9117e07b3e7d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:07:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 124 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 88 op/s
Jan 26 13:07:46 np0005596060 nova_compute[247421]: 2026-01-26 18:07:46.067 247428 DEBUG nova.network.neutron [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 26 13:07:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:46.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:47 np0005596060 nova_compute[247421]: 2026-01-26 18:07:47.420 247428 DEBUG nova.compute.manager [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdzsz1mp9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e40120ae-eb4e-4f0b-9d8f-f0210de78c4f',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Jan 26 13:07:47 np0005596060 nova_compute[247421]: 2026-01-26 18:07:47.466 247428 DEBUG oslo_concurrency.lockutils [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquiring lock "refresh_cache-e40120ae-eb4e-4f0b-9d8f-f0210de78c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:07:47 np0005596060 nova_compute[247421]: 2026-01-26 18:07:47.467 247428 DEBUG oslo_concurrency.lockutils [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquired lock "refresh_cache-e40120ae-eb4e-4f0b-9d8f-f0210de78c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:07:47 np0005596060 nova_compute[247421]: 2026-01-26 18:07:47.468 247428 DEBUG nova.network.neutron [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:07:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:47.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 13:07:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:48.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.354 247428 DEBUG nova.network.neutron [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Updating instance_info_cache with network_info: [{"id": "59c8d05c-d702-4701-8157-aa4f2da6736e", "address": "fa:16:3e:d7:77:d1", "network": {"id": "1ef6f1a6-165e-4b17-8f40-6c5a006288c4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1950110523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6a7c4658204ff7b58cbc0fec17a157", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59c8d05c-d7", "ovs_interfaceid": "59c8d05c-d702-4701-8157-aa4f2da6736e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.377 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Releasing lock "refresh_cache-8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.377 247428 DEBUG nova.compute.manager [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Instance network_info: |[{"id": "59c8d05c-d702-4701-8157-aa4f2da6736e", "address": "fa:16:3e:d7:77:d1", "network": {"id": "1ef6f1a6-165e-4b17-8f40-6c5a006288c4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1950110523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6a7c4658204ff7b58cbc0fec17a157", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59c8d05c-d7", "ovs_interfaceid": "59c8d05c-d702-4701-8157-aa4f2da6736e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.377 247428 DEBUG oslo_concurrency.lockutils [req-872d07c8-bfe0-4bd0-add4-8bb0b7822f1f req-ad59da07-8745-41ad-990d-9117e07b3e7d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.378 247428 DEBUG nova.network.neutron [req-872d07c8-bfe0-4bd0-add4-8bb0b7822f1f req-ad59da07-8745-41ad-990d-9117e07b3e7d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Refreshing network info cache for port 59c8d05c-d702-4701-8157-aa4f2da6736e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.381 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Start _get_guest_xml network_info=[{"id": "59c8d05c-d702-4701-8157-aa4f2da6736e", "address": "fa:16:3e:d7:77:d1", "network": {"id": "1ef6f1a6-165e-4b17-8f40-6c5a006288c4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1950110523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6a7c4658204ff7b58cbc0fec17a157", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59c8d05c-d7", "ovs_interfaceid": "59c8d05c-d702-4701-8157-aa4f2da6736e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.386 247428 WARNING nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.391 247428 DEBUG nova.virt.libvirt.host [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.392 247428 DEBUG nova.virt.libvirt.host [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.395 247428 DEBUG nova.virt.libvirt.host [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.395 247428 DEBUG nova.virt.libvirt.host [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.396 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.397 247428 DEBUG nova.virt.hardware [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:07:31Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='832459382',id=16,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-545368924',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.397 247428 DEBUG nova.virt.hardware [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.398 247428 DEBUG nova.virt.hardware [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.398 247428 DEBUG nova.virt.hardware [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.398 247428 DEBUG nova.virt.hardware [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.399 247428 DEBUG nova.virt.hardware [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.399 247428 DEBUG nova.virt.hardware [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.399 247428 DEBUG nova.virt.hardware [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.400 247428 DEBUG nova.virt.hardware [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.400 247428 DEBUG nova.virt.hardware [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.400 247428 DEBUG nova.virt.hardware [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.404 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:07:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:07:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:07:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4091901364' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.864 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.905 247428 DEBUG nova.storage.rbd_utils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] rbd image 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:07:48 np0005596060 nova_compute[247421]: 2026-01-26 18:07:48.910 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.607 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:49.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:07:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2967375538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.785 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.875s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.788 247428 DEBUG nova.virt.libvirt.vif [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:07:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-379951958',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-379951958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(16),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-379951958',id=6,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=16,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLrpjwE+Ecy4AAiCXUTJSmK61q8NybCDeA5k2+vIQ8wCiO+ptwfDNsYzsnUo27lqsZd2ACx5xgmi4WnnFmM7jeMejr1yR3v6fQC/AE3qGsGMdB3DcNq1saY+RRjofMNKqw==',key_name='tempest-keypair-231640517',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a6a7c4658204ff7b58cbc0fec17a157',ramdisk_id='',reservation_id='r-2qjhyut6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-255904350',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-255904350-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:07:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4824e9871dbc4b4c84dffadc67ceb442',uuid=8c19a6a9-b54e-4bc8-a58b-a6186c2d048b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59c8d05c-d702-4701-8157-aa4f2da6736e", "address": "fa:16:3e:d7:77:d1", "network": {"id": "1ef6f1a6-165e-4b17-8f40-6c5a006288c4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1950110523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6a7c4658204ff7b58cbc0fec17a157", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59c8d05c-d7", "ovs_interfaceid": "59c8d05c-d702-4701-8157-aa4f2da6736e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.789 247428 DEBUG nova.network.os_vif_util [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Converting VIF {"id": "59c8d05c-d702-4701-8157-aa4f2da6736e", "address": "fa:16:3e:d7:77:d1", "network": {"id": "1ef6f1a6-165e-4b17-8f40-6c5a006288c4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1950110523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6a7c4658204ff7b58cbc0fec17a157", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59c8d05c-d7", "ovs_interfaceid": "59c8d05c-d702-4701-8157-aa4f2da6736e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.791 247428 DEBUG nova.network.os_vif_util [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:77:d1,bridge_name='br-int',has_traffic_filtering=True,id=59c8d05c-d702-4701-8157-aa4f2da6736e,network=Network(1ef6f1a6-165e-4b17-8f40-6c5a006288c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59c8d05c-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.794 247428 DEBUG nova.objects.instance [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.856 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  <uuid>8c19a6a9-b54e-4bc8-a58b-a6186c2d048b</uuid>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  <name>instance-00000006</name>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-379951958</nova:name>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:07:48</nova:creationTime>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-545368924">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <nova:user uuid="4824e9871dbc4b4c84dffadc67ceb442">tempest-ServersWithSpecificFlavorTestJSON-255904350-project-member</nova:user>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <nova:project uuid="4a6a7c4658204ff7b58cbc0fec17a157">tempest-ServersWithSpecificFlavorTestJSON-255904350</nova:project>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <nova:port uuid="59c8d05c-d702-4701-8157-aa4f2da6736e">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <entry name="serial">8c19a6a9-b54e-4bc8-a58b-a6186c2d048b</entry>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <entry name="uuid">8c19a6a9-b54e-4bc8-a58b-a6186c2d048b</entry>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk.config">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:d7:77:d1"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <target dev="tap59c8d05c-d7"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/8c19a6a9-b54e-4bc8-a58b-a6186c2d048b/console.log" append="off"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:07:49 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:07:49 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:07:49 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:07:49 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.856 247428 DEBUG nova.compute.manager [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Preparing to wait for external event network-vif-plugged-59c8d05c-d702-4701-8157-aa4f2da6736e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.857 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Acquiring lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.857 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.857 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.857 247428 DEBUG nova.virt.libvirt.vif [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:07:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-379951958',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-379951958',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(16),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-379951958',id=6,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=16,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLrpjwE+Ecy4AAiCXUTJSmK61q8NybCDeA5k2+vIQ8wCiO+ptwfDNsYzsnUo27lqsZd2ACx5xgmi4WnnFmM7jeMejr1yR3v6fQC/AE3qGsGMdB3DcNq1saY+RRjofMNKqw==',key_name='tempest-keypair-231640517',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a6a7c4658204ff7b58cbc0fec17a157',ramdisk_id='',reservation_id='r-2qjhyut6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-255904350',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-255904350-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:07:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4824e9871dbc4b4c84dffadc67ceb442',uuid=8c19a6a9-b54e-4bc8-a58b-a6186c2d048b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "59c8d05c-d702-4701-8157-aa4f2da6736e", "address": "fa:16:3e:d7:77:d1", "network": {"id": "1ef6f1a6-165e-4b17-8f40-6c5a006288c4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1950110523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6a7c4658204ff7b58cbc0fec17a157", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59c8d05c-d7", "ovs_interfaceid": "59c8d05c-d702-4701-8157-aa4f2da6736e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.858 247428 DEBUG nova.network.os_vif_util [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Converting VIF {"id": "59c8d05c-d702-4701-8157-aa4f2da6736e", "address": "fa:16:3e:d7:77:d1", "network": {"id": "1ef6f1a6-165e-4b17-8f40-6c5a006288c4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1950110523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6a7c4658204ff7b58cbc0fec17a157", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59c8d05c-d7", "ovs_interfaceid": "59c8d05c-d702-4701-8157-aa4f2da6736e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.858 247428 DEBUG nova.network.os_vif_util [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:77:d1,bridge_name='br-int',has_traffic_filtering=True,id=59c8d05c-d702-4701-8157-aa4f2da6736e,network=Network(1ef6f1a6-165e-4b17-8f40-6c5a006288c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59c8d05c-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.858 247428 DEBUG os_vif [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:77:d1,bridge_name='br-int',has_traffic_filtering=True,id=59c8d05c-d702-4701-8157-aa4f2da6736e,network=Network(1ef6f1a6-165e-4b17-8f40-6c5a006288c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59c8d05c-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.859 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.859 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.860 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.863 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.863 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap59c8d05c-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.863 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap59c8d05c-d7, col_values=(('external_ids', {'iface-id': '59c8d05c-d702-4701-8157-aa4f2da6736e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:77:d1', 'vm-uuid': '8c19a6a9-b54e-4bc8-a58b-a6186c2d048b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.865 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:49 np0005596060 NetworkManager[48900]: <info>  [1769450869.8657] manager: (tap59c8d05c-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.867 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.873 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:49 np0005596060 nova_compute[247421]: 2026-01-26 18:07:49.874 247428 INFO os_vif [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:77:d1,bridge_name='br-int',has_traffic_filtering=True,id=59c8d05c-d702-4701-8157-aa4f2da6736e,network=Network(1ef6f1a6-165e-4b17-8f40-6c5a006288c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59c8d05c-d7')#033[00m
Jan 26 13:07:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 956 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.014 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.014 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.014 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] No VIF found with MAC fa:16:3e:d7:77:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.015 247428 INFO nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Using config drive#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.046 247428 DEBUG nova.storage.rbd_utils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] rbd image 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:07:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:50.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.552 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.893 247428 DEBUG nova.network.neutron [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Updating instance_info_cache with network_info: [{"id": "06538465-e309-4216-af1a-244565d3805b", "address": "fa:16:3e:35:48:ae", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06538465-e3", "ovs_interfaceid": "06538465-e309-4216-af1a-244565d3805b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.900 247428 INFO nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Creating config drive at /var/lib/nova/instances/8c19a6a9-b54e-4bc8-a58b-a6186c2d048b/disk.config#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.912 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8c19a6a9-b54e-4bc8-a58b-a6186c2d048b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx7o4lbk2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.979 247428 DEBUG oslo_concurrency.lockutils [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Releasing lock "refresh_cache-e40120ae-eb4e-4f0b-9d8f-f0210de78c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.983 247428 DEBUG nova.virt.libvirt.driver [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdzsz1mp9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e40120ae-eb4e-4f0b-9d8f-f0210de78c4f',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.984 247428 DEBUG nova.virt.libvirt.driver [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Creating instance directory: /var/lib/nova/instances/e40120ae-eb4e-4f0b-9d8f-f0210de78c4f pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.984 247428 DEBUG nova.virt.libvirt.driver [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Ensure instance console log exists: /var/lib/nova/instances/e40120ae-eb4e-4f0b-9d8f-f0210de78c4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.985 247428 DEBUG nova.virt.libvirt.driver [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.986 247428 DEBUG nova.virt.libvirt.vif [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:07:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1296850176',display_name='tempest-LiveMigrationTest-server-1296850176',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1296850176',id=5,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:07:36Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b1f2cad350784d7eae39fc23fb032500',ramdisk_id='',reservation_id='r-02y9chrd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-877386369',owner_user_name='tempest-LiveMigrationTest-877386369-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:07:36Z,user_data=None,user_id='9e3f505042e7463683259f02e8e59eca',uuid=e40120ae-eb4e-4f0b-9d8f-f0210de78c4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06538465-e309-4216-af1a-244565d3805b", "address": "fa:16:3e:35:48:ae", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap06538465-e3", "ovs_interfaceid": "06538465-e309-4216-af1a-244565d3805b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.987 247428 DEBUG nova.network.os_vif_util [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Converting VIF {"id": "06538465-e309-4216-af1a-244565d3805b", "address": "fa:16:3e:35:48:ae", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap06538465-e3", "ovs_interfaceid": "06538465-e309-4216-af1a-244565d3805b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.988 247428 DEBUG nova.network.os_vif_util [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:48:ae,bridge_name='br-int',has_traffic_filtering=True,id=06538465-e309-4216-af1a-244565d3805b,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap06538465-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.989 247428 DEBUG os_vif [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:48:ae,bridge_name='br-int',has_traffic_filtering=True,id=06538465-e309-4216-af1a-244565d3805b,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap06538465-e3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.990 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.991 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.991 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.995 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.995 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06538465-e3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.996 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06538465-e3, col_values=(('external_ids', {'iface-id': '06538465-e309-4216-af1a-244565d3805b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:48:ae', 'vm-uuid': 'e40120ae-eb4e-4f0b-9d8f-f0210de78c4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:07:50 np0005596060 nova_compute[247421]: 2026-01-26 18:07:50.998 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:50 np0005596060 NetworkManager[48900]: <info>  [1769450870.9990] manager: (tap06538465-e3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.000 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.007 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.009 247428 INFO os_vif [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:48:ae,bridge_name='br-int',has_traffic_filtering=True,id=06538465-e309-4216-af1a-244565d3805b,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap06538465-e3')#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.010 247428 DEBUG nova.virt.libvirt.driver [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.010 247428 DEBUG nova.compute.manager [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdzsz1mp9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e40120ae-eb4e-4f0b-9d8f-f0210de78c4f',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.071 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8c19a6a9-b54e-4bc8-a58b-a6186c2d048b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx7o4lbk2" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.095 247428 DEBUG nova.storage.rbd_utils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] rbd image 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.098 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8c19a6a9-b54e-4bc8-a58b-a6186c2d048b/disk.config 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.244 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.245 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.246 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.369 247428 DEBUG oslo_concurrency.processutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8c19a6a9-b54e-4bc8-a58b-a6186c2d048b/disk.config 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.370 247428 INFO nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Deleting local config drive /var/lib/nova/instances/8c19a6a9-b54e-4bc8-a58b-a6186c2d048b/disk.config because it was imported into RBD.#033[00m
Jan 26 13:07:51 np0005596060 podman[255548]: 2026-01-26 18:07:51.378961596 +0000 UTC m=+0.289341395 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 13:07:51 np0005596060 kernel: tap59c8d05c-d7: entered promiscuous mode
Jan 26 13:07:51 np0005596060 NetworkManager[48900]: <info>  [1769450871.4387] manager: (tap59c8d05c-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 26 13:07:51 np0005596060 ovn_controller[148842]: 2026-01-26T18:07:51Z|00036|binding|INFO|Claiming lport 59c8d05c-d702-4701-8157-aa4f2da6736e for this chassis.
Jan 26 13:07:51 np0005596060 ovn_controller[148842]: 2026-01-26T18:07:51Z|00037|binding|INFO|59c8d05c-d702-4701-8157-aa4f2da6736e: Claiming fa:16:3e:d7:77:d1 10.100.0.13
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.441 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:51 np0005596060 systemd-machined[213879]: New machine qemu-3-instance-00000006.
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.483 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:77:d1 10.100.0.13'], port_security=['fa:16:3e:d7:77:d1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8c19a6a9-b54e-4bc8-a58b-a6186c2d048b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef6f1a6-165e-4b17-8f40-6c5a006288c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a6a7c4658204ff7b58cbc0fec17a157', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc23a8c7-7183-4d87-9ed1-326712e58ede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fecc7817-277f-455e-9d40-c1c58433f73c, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=59c8d05c-d702-4701-8157-aa4f2da6736e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.485 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 59c8d05c-d702-4701-8157-aa4f2da6736e in datapath 1ef6f1a6-165e-4b17-8f40-6c5a006288c4 bound to our chassis#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.486 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1ef6f1a6-165e-4b17-8f40-6c5a006288c4#033[00m
Jan 26 13:07:51 np0005596060 systemd[1]: Started Virtual Machine qemu-3-instance-00000006.
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.497 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f269e775-b166-41cf-a2f6-ed2024294c52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.498 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1ef6f1a6-11 in ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.500 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1ef6f1a6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.500 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[2a028eda-6b70-496c-9e84-e1b8fdbd5dc2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.501 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d563c9-4e4e-460b-a812-adff2b98286c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 podman[255548]: 2026-01-26 18:07:51.518600434 +0000 UTC m=+0.428980303 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:07:51 np0005596060 systemd-udevd[255624]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.526 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[e87ae403-7070-4683-85cd-f79cf477ca14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 NetworkManager[48900]: <info>  [1769450871.5340] device (tap59c8d05c-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:07:51 np0005596060 NetworkManager[48900]: <info>  [1769450871.5347] device (tap59c8d05c-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.540 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:51 np0005596060 ovn_controller[148842]: 2026-01-26T18:07:51Z|00038|binding|INFO|Setting lport 59c8d05c-d702-4701-8157-aa4f2da6736e ovn-installed in OVS
Jan 26 13:07:51 np0005596060 ovn_controller[148842]: 2026-01-26T18:07:51Z|00039|binding|INFO|Setting lport 59c8d05c-d702-4701-8157-aa4f2da6736e up in Southbound
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.543 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0a3a37-f7b3-4146-b204-6b1bdd081ae6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.545 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.581 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[a16dab67-4258-4abb-8c7b-e8bf93e41367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.586 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[acf7e143-bf3e-4f80-a0e1-054b3e98e23e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 NetworkManager[48900]: <info>  [1769450871.5873] manager: (tap1ef6f1a6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.621 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[cd21c644-000f-409d-8dca-a04880f8a206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.625 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[677c9aaa-80fe-4008-b586-a0ab3baf311a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 NetworkManager[48900]: <info>  [1769450871.6488] device (tap1ef6f1a6-10): carrier: link connected
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.654 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[4d630014-8be9-4ca8-bf32-18dcc572c209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:51.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.673 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[14157650-0720-4deb-88a6-4e9eca72056e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ef6f1a6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:42:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463623, 'reachable_time': 34064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255661, 'error': None, 'target': 'ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.675 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.690 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fd2e46-d8f2-45fc-9501-82935f4f8f86]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:42d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463623, 'tstamp': 463623}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255662, 'error': None, 'target': 'ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.723 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[6025c24d-5102-4e2d-8e39-3e63c729e42a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1ef6f1a6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:42:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463623, 'reachable_time': 34064, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255665, 'error': None, 'target': 'ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.771 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[033b456d-09b1-435b-98aa-408dc26ae094]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.837 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a2196500-90cc-4df4-997f-337a58b1f873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.839 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ef6f1a6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.839 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.839 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ef6f1a6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.841 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:51 np0005596060 NetworkManager[48900]: <info>  [1769450871.8423] manager: (tap1ef6f1a6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Jan 26 13:07:51 np0005596060 kernel: tap1ef6f1a6-10: entered promiscuous mode
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.848 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.850 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1ef6f1a6-10, col_values=(('external_ids', {'iface-id': '79905c80-0fa3-48cf-98bd-969990cc351a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.851 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:51 np0005596060 ovn_controller[148842]: 2026-01-26T18:07:51Z|00040|binding|INFO|Releasing lport 79905c80-0fa3-48cf-98bd-969990cc351a from this chassis (sb_readonly=0)
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.882 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:51 np0005596060 nova_compute[247421]: 2026-01-26 18:07:51.888 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.890 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1ef6f1a6-165e-4b17-8f40-6c5a006288c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1ef6f1a6-165e-4b17-8f40-6c5a006288c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.891 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e24eef75-38a9-4315-a437-01de491c2259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.892 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-1ef6f1a6-165e-4b17-8f40-6c5a006288c4
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/1ef6f1a6-165e-4b17-8f40-6c5a006288c4.pid.haproxy
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 1ef6f1a6-165e-4b17-8f40-6c5a006288c4
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:07:51 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:51.893 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4', 'env', 'PROCESS_TAG=haproxy-1ef6f1a6-165e-4b17-8f40-6c5a006288c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1ef6f1a6-165e-4b17-8f40-6c5a006288c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:07:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 140 MiB data, 280 MiB used, 21 GiB / 21 GiB avail; 976 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Jan 26 13:07:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:52.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:52 np0005596060 nova_compute[247421]: 2026-01-26 18:07:52.211 247428 DEBUG nova.network.neutron [req-872d07c8-bfe0-4bd0-add4-8bb0b7822f1f req-ad59da07-8745-41ad-990d-9117e07b3e7d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Updated VIF entry in instance network info cache for port 59c8d05c-d702-4701-8157-aa4f2da6736e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:07:52 np0005596060 nova_compute[247421]: 2026-01-26 18:07:52.212 247428 DEBUG nova.network.neutron [req-872d07c8-bfe0-4bd0-add4-8bb0b7822f1f req-ad59da07-8745-41ad-990d-9117e07b3e7d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Updating instance_info_cache with network_info: [{"id": "59c8d05c-d702-4701-8157-aa4f2da6736e", "address": "fa:16:3e:d7:77:d1", "network": {"id": "1ef6f1a6-165e-4b17-8f40-6c5a006288c4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1950110523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6a7c4658204ff7b58cbc0fec17a157", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59c8d05c-d7", "ovs_interfaceid": "59c8d05c-d702-4701-8157-aa4f2da6736e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:07:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:07:52.249 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:07:52 np0005596060 podman[255803]: 2026-01-26 18:07:52.28100795 +0000 UTC m=+0.030410472 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:07:52 np0005596060 podman[255803]: 2026-01-26 18:07:52.468365256 +0000 UTC m=+0.217767708 container create 7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 13:07:52 np0005596060 nova_compute[247421]: 2026-01-26 18:07:52.554 247428 DEBUG oslo_concurrency.lockutils [req-872d07c8-bfe0-4bd0-add4-8bb0b7822f1f req-ad59da07-8745-41ad-990d-9117e07b3e7d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:07:52 np0005596060 systemd[1]: Started libpod-conmon-7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb.scope.
Jan 26 13:07:52 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:07:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c40ec4c03ff285927843130cdc4e42e50d14618a446bfcda30a8903dcd833a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:07:52 np0005596060 nova_compute[247421]: 2026-01-26 18:07:52.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:07:52 np0005596060 nova_compute[247421]: 2026-01-26 18:07:52.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:07:52 np0005596060 nova_compute[247421]: 2026-01-26 18:07:52.710 247428 DEBUG nova.compute.manager [req-1433f254-397f-4a30-bc12-1ea628aa4fab req-f85a4011-c982-4130-b533-db287f701d84 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Received event network-vif-plugged-59c8d05c-d702-4701-8157-aa4f2da6736e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:07:52 np0005596060 nova_compute[247421]: 2026-01-26 18:07:52.712 247428 DEBUG oslo_concurrency.lockutils [req-1433f254-397f-4a30-bc12-1ea628aa4fab req-f85a4011-c982-4130-b533-db287f701d84 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:07:52 np0005596060 nova_compute[247421]: 2026-01-26 18:07:52.712 247428 DEBUG oslo_concurrency.lockutils [req-1433f254-397f-4a30-bc12-1ea628aa4fab req-f85a4011-c982-4130-b533-db287f701d84 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:07:52 np0005596060 nova_compute[247421]: 2026-01-26 18:07:52.713 247428 DEBUG oslo_concurrency.lockutils [req-1433f254-397f-4a30-bc12-1ea628aa4fab req-f85a4011-c982-4130-b533-db287f701d84 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:07:52 np0005596060 nova_compute[247421]: 2026-01-26 18:07:52.713 247428 DEBUG nova.compute.manager [req-1433f254-397f-4a30-bc12-1ea628aa4fab req-f85a4011-c982-4130-b533-db287f701d84 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Processing event network-vif-plugged-59c8d05c-d702-4701-8157-aa4f2da6736e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 26 13:07:52 np0005596060 podman[255849]: 2026-01-26 18:07:52.723774993 +0000 UTC m=+0.153930392 container exec e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 13:07:52 np0005596060 podman[255803]: 2026-01-26 18:07:52.732213012 +0000 UTC m=+0.481615504 container init 7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 13:07:52 np0005596060 podman[255803]: 2026-01-26 18:07:52.738041056 +0000 UTC m=+0.487443518 container start 7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 13:07:52 np0005596060 neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4[255851]: [NOTICE]   (255875) : New worker (255882) forked
Jan 26 13:07:52 np0005596060 neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4[255851]: [NOTICE]   (255875) : Loading success.
Jan 26 13:07:52 np0005596060 podman[255849]: 2026-01-26 18:07:52.815198691 +0000 UTC m=+0.245354120 container exec_died e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.206 247428 DEBUG nova.compute.manager [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.208 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450873.2071645, 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.208 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] VM Started (Lifecycle Event)#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.211 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.215 247428 INFO nova.virt.libvirt.driver [-] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Instance spawned successfully.#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.216 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.259 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.267 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:07:53 np0005596060 podman[255956]: 2026-01-26 18:07:53.282131661 +0000 UTC m=+0.091083490 container exec 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, architecture=x86_64, name=keepalived, description=keepalived for Ceph, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vendor=Red Hat, Inc., version=2.2.4, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.289 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.290 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.291 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.292 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.293 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.294 247428 DEBUG nova.virt.libvirt.driver [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.318 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.319 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450873.2073443, 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.320 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] VM Paused (Lifecycle Event)#033[00m
Jan 26 13:07:53 np0005596060 podman[255956]: 2026-01-26 18:07:53.333827767 +0000 UTC m=+0.142779596 container exec_died 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, distribution-scope=public, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, name=keepalived, release=1793, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.400 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.406 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450873.2107794, 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.407 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:07:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.437 247428 INFO nova.compute.manager [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Took 12.08 seconds to spawn the instance on the hypervisor.#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.439 247428 DEBUG nova.compute.manager [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.458 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.471 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.547 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.591 247428 INFO nova.compute.manager [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Took 13.37 seconds to build instance.#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.644 247428 DEBUG oslo_concurrency.lockutils [None req-25975951-bea7-4677-a550-458d92aeafe2 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.647 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:07:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:07:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:53.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.688 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.689 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.727 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.727 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.728 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.728 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:07:53 np0005596060 nova_compute[247421]: 2026-01-26 18:07:53.729 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:07:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 160 MiB data, 300 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.8 MiB/s wr, 106 op/s
Jan 26 13:07:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:07:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:54.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:07:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:07:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3001369160' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.178 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.306 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.307 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:07:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:07:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:07:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.505 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.506 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4757MB free_disk=20.94662857055664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.507 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.507 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.618 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Migration for instance e40120ae-eb4e-4f0b-9d8f-f0210de78c4f refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.662 247428 INFO nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Updating resource usage from migration 1b877e7a-f025-4e3a-b89d-0d8bb1ffb592#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.662 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Starting to track incoming migration 1b877e7a-f025-4e3a-b89d-0d8bb1ffb592 with flavor c19d349c-ad8f-4453-bd9e-1248725b13ed _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 26 13:07:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.762 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.870 247428 WARNING nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance e40120ae-eb4e-4f0b-9d8f-f0210de78c4f has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}.#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.872 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.873 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.933 247428 DEBUG nova.compute.manager [req-d842ad80-2606-4cd4-952b-a153212645a7 req-8e8050e3-1d24-4a78-8db4-5449dac85809 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Received event network-vif-plugged-59c8d05c-d702-4701-8157-aa4f2da6736e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.934 247428 DEBUG oslo_concurrency.lockutils [req-d842ad80-2606-4cd4-952b-a153212645a7 req-8e8050e3-1d24-4a78-8db4-5449dac85809 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.935 247428 DEBUG oslo_concurrency.lockutils [req-d842ad80-2606-4cd4-952b-a153212645a7 req-8e8050e3-1d24-4a78-8db4-5449dac85809 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.935 247428 DEBUG oslo_concurrency.lockutils [req-d842ad80-2606-4cd4-952b-a153212645a7 req-8e8050e3-1d24-4a78-8db4-5449dac85809 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.936 247428 DEBUG nova.compute.manager [req-d842ad80-2606-4cd4-952b-a153212645a7 req-8e8050e3-1d24-4a78-8db4-5449dac85809 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] No waiting events found dispatching network-vif-plugged-59c8d05c-d702-4701-8157-aa4f2da6736e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.937 247428 WARNING nova.compute.manager [req-d842ad80-2606-4cd4-952b-a153212645a7 req-8e8050e3-1d24-4a78-8db4-5449dac85809 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Received unexpected event network-vif-plugged-59c8d05c-d702-4701-8157-aa4f2da6736e for instance with vm_state active and task_state None.#033[00m
Jan 26 13:07:54 np0005596060 nova_compute[247421]: 2026-01-26 18:07:54.981 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:07:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:07:55 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.442103) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450875442194, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1081, "num_deletes": 251, "total_data_size": 1652859, "memory_usage": 1678704, "flush_reason": "Manual Compaction"}
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450875511606, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1622611, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20832, "largest_seqno": 21912, "table_properties": {"data_size": 1617509, "index_size": 2562, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11582, "raw_average_key_size": 19, "raw_value_size": 1606949, "raw_average_value_size": 2765, "num_data_blocks": 115, "num_entries": 581, "num_filter_entries": 581, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769450784, "oldest_key_time": 1769450784, "file_creation_time": 1769450875, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 69576 microseconds, and 5144 cpu microseconds.
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.511683) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1622611 bytes OK
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.511714) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.526209) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.526251) EVENT_LOG_v1 {"time_micros": 1769450875526240, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.526277) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1647909, prev total WAL file size 1682762, number of live WAL files 2.
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.526979) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1584KB)], [47(8078KB)]
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450875527010, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9895302, "oldest_snapshot_seqno": -1}
Jan 26 13:07:55 np0005596060 nova_compute[247421]: 2026-01-26 18:07:55.554 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3534311434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:07:55 np0005596060 nova_compute[247421]: 2026-01-26 18:07:55.598 247428 DEBUG nova.network.neutron [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Port 06538465-e309-4216-af1a-244565d3805b updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Jan 26 13:07:55 np0005596060 nova_compute[247421]: 2026-01-26 18:07:55.600 247428 DEBUG nova.compute.manager [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpdzsz1mp9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e40120ae-eb4e-4f0b-9d8f-f0210de78c4f',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Jan 26 13:07:55 np0005596060 nova_compute[247421]: 2026-01-26 18:07:55.616 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:07:55 np0005596060 nova_compute[247421]: 2026-01-26 18:07:55.624 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4758 keys, 7858411 bytes, temperature: kUnknown
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450875650430, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7858411, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7826968, "index_size": 18469, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11909, "raw_key_size": 119284, "raw_average_key_size": 25, "raw_value_size": 7741081, "raw_average_value_size": 1626, "num_data_blocks": 756, "num_entries": 4758, "num_filter_entries": 4758, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769450875, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:07:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:07:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:55.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.650850) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7858411 bytes
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.668257) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.1 rd, 63.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(10.9) write-amplify(4.8) OK, records in: 5278, records dropped: 520 output_compression: NoCompression
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.668321) EVENT_LOG_v1 {"time_micros": 1769450875668299, "job": 24, "event": "compaction_finished", "compaction_time_micros": 123590, "compaction_time_cpu_micros": 19002, "output_level": 6, "num_output_files": 1, "total_output_size": 7858411, "num_input_records": 5278, "num_output_records": 4758, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450875668952, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450875670809, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.526897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.670881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.670888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.670890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.670891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:07:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:07:55.670893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:07:55 np0005596060 nova_compute[247421]: 2026-01-26 18:07:55.676 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:07:55 np0005596060 nova_compute[247421]: 2026-01-26 18:07:55.733 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:07:55 np0005596060 nova_compute[247421]: 2026-01-26 18:07:55.734 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:07:55 np0005596060 systemd[1]: Starting libvirt proxy daemon...
Jan 26 13:07:55 np0005596060 systemd[1]: Started libvirt proxy daemon.
Jan 26 13:07:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 160 MiB data, 300 MiB used, 21 GiB / 21 GiB avail; 289 KiB/s rd, 2.3 MiB/s wr, 64 op/s
Jan 26 13:07:56 np0005596060 nova_compute[247421]: 2026-01-26 18:07:55.998 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:56 np0005596060 kernel: tap06538465-e3: entered promiscuous mode
Jan 26 13:07:56 np0005596060 ovn_controller[148842]: 2026-01-26T18:07:56Z|00041|binding|INFO|Claiming lport 06538465-e309-4216-af1a-244565d3805b for this additional chassis.
Jan 26 13:07:56 np0005596060 ovn_controller[148842]: 2026-01-26T18:07:56Z|00042|binding|INFO|06538465-e309-4216-af1a-244565d3805b: Claiming fa:16:3e:35:48:ae 10.100.0.14
Jan 26 13:07:56 np0005596060 ovn_controller[148842]: 2026-01-26T18:07:56Z|00043|binding|INFO|Claiming lport 8efebc34-f8eb-42e5-af94-78e84c0dcbba for this additional chassis.
Jan 26 13:07:56 np0005596060 ovn_controller[148842]: 2026-01-26T18:07:56Z|00044|binding|INFO|8efebc34-f8eb-42e5-af94-78e84c0dcbba: Claiming fa:16:3e:c6:69:fa 19.80.0.72
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <info>  [1769450876.0323] manager: (tap06538465-e3): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Jan 26 13:07:56 np0005596060 nova_compute[247421]: 2026-01-26 18:07:56.037 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:56 np0005596060 systemd-udevd[256194]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:07:56 np0005596060 systemd-machined[213879]: New machine qemu-4-instance-00000005.
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <info>  [1769450876.0905] device (tap06538465-e3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <info>  [1769450876.0914] device (tap06538465-e3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:07:56 np0005596060 systemd[1]: Started Virtual Machine qemu-4-instance-00000005.
Jan 26 13:07:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:56.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:56 np0005596060 nova_compute[247421]: 2026-01-26 18:07:56.148 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:56 np0005596060 ovn_controller[148842]: 2026-01-26T18:07:56Z|00045|binding|INFO|Setting lport 06538465-e309-4216-af1a-244565d3805b ovn-installed in OVS
Jan 26 13:07:56 np0005596060 nova_compute[247421]: 2026-01-26 18:07:56.156 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:07:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:07:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:07:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:07:56 np0005596060 nova_compute[247421]: 2026-01-26 18:07:56.696 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:07:56 np0005596060 nova_compute[247421]: 2026-01-26 18:07:56.697 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:07:56 np0005596060 nova_compute[247421]: 2026-01-26 18:07:56.697 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <info>  [1769450876.9843] manager: (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/34)
Jan 26 13:07:56 np0005596060 nova_compute[247421]: 2026-01-26 18:07:56.982 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <info>  [1769450876.9855] device (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <warn>  [1769450876.9857] device (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <info>  [1769450876.9874] manager: (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/35)
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <info>  [1769450876.9883] device (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <warn>  [1769450876.9885] device (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <info>  [1769450876.9904] manager: (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 26 13:07:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:07:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <info>  [1769450876.9932] manager: (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <info>  [1769450876.9944] device (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 26 13:07:56 np0005596060 NetworkManager[48900]: <info>  [1769450876.9954] device (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 26 13:07:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:07:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:07:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.109 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:57 np0005596060 ovn_controller[148842]: 2026-01-26T18:07:57Z|00046|binding|INFO|Releasing lport 79905c80-0fa3-48cf-98bd-969990cc351a from this chassis (sb_readonly=0)
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.125 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:07:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:07:57 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 72f0c42d-7878-4a04-83fc-7c48d07ff778 does not exist
Jan 26 13:07:57 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 1247362f-8786-418e-a850-d74279cd74c6 does not exist
Jan 26 13:07:57 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev bd5f1fce-98a1-47c5-aea5-d78f2c22a490 does not exist
Jan 26 13:07:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:07:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:07:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:07:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:07:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:07:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:07:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:07:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:07:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:07:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.385 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450877.384605, e40120ae-eb4e-4f0b-9d8f-f0210de78c4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.387 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] VM Started (Lifecycle Event)#033[00m
Jan 26 13:07:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:57.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.719 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:07:57 np0005596060 podman[256388]: 2026-01-26 18:07:57.79802744 +0000 UTC m=+0.063915470 container create 9ae70ec0a0175c6ec1761aaa9566712b77e4162d9b9bc03d3c211bcf9a5116d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.813 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "refresh_cache-8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.813 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquired lock "refresh_cache-8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.813 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.813 247428 DEBUG nova.objects.instance [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:07:57 np0005596060 systemd[1]: Started libpod-conmon-9ae70ec0a0175c6ec1761aaa9566712b77e4162d9b9bc03d3c211bcf9a5116d8.scope.
Jan 26 13:07:57 np0005596060 podman[256388]: 2026-01-26 18:07:57.760684527 +0000 UTC m=+0.026572597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:07:57 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.893 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450877.893724, e40120ae-eb4e-4f0b-9d8f-f0210de78c4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.895 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:07:57 np0005596060 podman[256388]: 2026-01-26 18:07:57.913416019 +0000 UTC m=+0.179304079 container init 9ae70ec0a0175c6ec1761aaa9566712b77e4162d9b9bc03d3c211bcf9a5116d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:07:57 np0005596060 podman[256388]: 2026-01-26 18:07:57.919621482 +0000 UTC m=+0.185509522 container start 9ae70ec0a0175c6ec1761aaa9566712b77e4162d9b9bc03d3c211bcf9a5116d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:07:57 np0005596060 vigorous_khayyam[256405]: 167 167
Jan 26 13:07:57 np0005596060 systemd[1]: libpod-9ae70ec0a0175c6ec1761aaa9566712b77e4162d9b9bc03d3c211bcf9a5116d8.scope: Deactivated successfully.
Jan 26 13:07:57 np0005596060 podman[256388]: 2026-01-26 18:07:57.944688101 +0000 UTC m=+0.210576141 container attach 9ae70ec0a0175c6ec1761aaa9566712b77e4162d9b9bc03d3c211bcf9a5116d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:07:57 np0005596060 podman[256388]: 2026-01-26 18:07:57.946857305 +0000 UTC m=+0.212745355 container died 9ae70ec0a0175c6ec1761aaa9566712b77e4162d9b9bc03d3c211bcf9a5116d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 13:07:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 167 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 153 op/s
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.990 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:07:57 np0005596060 nova_compute[247421]: 2026-01-26 18:07:57.994 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:07:58 np0005596060 systemd[1]: var-lib-containers-storage-overlay-9e3774cc686d4f5e39faeb48fa1a96c6eec19421cadb0f084bb6ecc6782fe463-merged.mount: Deactivated successfully.
Jan 26 13:07:58 np0005596060 nova_compute[247421]: 2026-01-26 18:07:58.057 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 26 13:07:58 np0005596060 podman[256388]: 2026-01-26 18:07:58.12082327 +0000 UTC m=+0.386711320 container remove 9ae70ec0a0175c6ec1761aaa9566712b77e4162d9b9bc03d3c211bcf9a5116d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 13:07:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:07:58.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:58 np0005596060 systemd[1]: libpod-conmon-9ae70ec0a0175c6ec1761aaa9566712b77e4162d9b9bc03d3c211bcf9a5116d8.scope: Deactivated successfully.
Jan 26 13:07:58 np0005596060 podman[256431]: 2026-01-26 18:07:58.396465296 +0000 UTC m=+0.108910490 container create f824278b93b9f1329fd4cbd30a76adf9d90ac9ed2ba7820bd2312ba1c1891b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:07:58 np0005596060 podman[256431]: 2026-01-26 18:07:58.335040269 +0000 UTC m=+0.047485493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:07:58 np0005596060 systemd[1]: Started libpod-conmon-f824278b93b9f1329fd4cbd30a76adf9d90ac9ed2ba7820bd2312ba1c1891b1d.scope.
Jan 26 13:07:58 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:07:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df3f674d6e308f7bd33813a617ab237fb5ad2c1f738b39f4f20e8e5cd1f1cc4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:07:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df3f674d6e308f7bd33813a617ab237fb5ad2c1f738b39f4f20e8e5cd1f1cc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:07:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df3f674d6e308f7bd33813a617ab237fb5ad2c1f738b39f4f20e8e5cd1f1cc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:07:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df3f674d6e308f7bd33813a617ab237fb5ad2c1f738b39f4f20e8e5cd1f1cc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:07:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df3f674d6e308f7bd33813a617ab237fb5ad2c1f738b39f4f20e8e5cd1f1cc4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:07:58 np0005596060 podman[256431]: 2026-01-26 18:07:58.59186396 +0000 UTC m=+0.304309204 container init f824278b93b9f1329fd4cbd30a76adf9d90ac9ed2ba7820bd2312ba1c1891b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:07:58 np0005596060 podman[256431]: 2026-01-26 18:07:58.601997571 +0000 UTC m=+0.314442775 container start f824278b93b9f1329fd4cbd30a76adf9d90ac9ed2ba7820bd2312ba1c1891b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:07:58 np0005596060 podman[256431]: 2026-01-26 18:07:58.615611257 +0000 UTC m=+0.328056421 container attach f824278b93b9f1329fd4cbd30a76adf9d90ac9ed2ba7820bd2312ba1c1891b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 13:07:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:07:58 np0005596060 nova_compute[247421]: 2026-01-26 18:07:58.757 247428 DEBUG nova.compute.manager [req-9be93288-3b0a-4909-9641-ae7b87708b75 req-1448dec0-0d42-4f0b-8ce3-ae8b475648f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Received event network-changed-59c8d05c-d702-4701-8157-aa4f2da6736e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:07:58 np0005596060 nova_compute[247421]: 2026-01-26 18:07:58.759 247428 DEBUG nova.compute.manager [req-9be93288-3b0a-4909-9641-ae7b87708b75 req-1448dec0-0d42-4f0b-8ce3-ae8b475648f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Refreshing instance network info cache due to event network-changed-59c8d05c-d702-4701-8157-aa4f2da6736e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:07:58 np0005596060 nova_compute[247421]: 2026-01-26 18:07:58.759 247428 DEBUG oslo_concurrency.lockutils [req-9be93288-3b0a-4909-9641-ae7b87708b75 req-1448dec0-0d42-4f0b-8ce3-ae8b475648f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:07:59 np0005596060 nostalgic_dubinsky[256448]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:07:59 np0005596060 nostalgic_dubinsky[256448]: --> relative data size: 1.0
Jan 26 13:07:59 np0005596060 nostalgic_dubinsky[256448]: --> All data devices are unavailable
Jan 26 13:07:59 np0005596060 systemd[1]: libpod-f824278b93b9f1329fd4cbd30a76adf9d90ac9ed2ba7820bd2312ba1c1891b1d.scope: Deactivated successfully.
Jan 26 13:07:59 np0005596060 podman[256431]: 2026-01-26 18:07:59.44754437 +0000 UTC m=+1.159989564 container died f824278b93b9f1329fd4cbd30a76adf9d90ac9ed2ba7820bd2312ba1c1891b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:07:59 np0005596060 systemd[1]: var-lib-containers-storage-overlay-6df3f674d6e308f7bd33813a617ab237fb5ad2c1f738b39f4f20e8e5cd1f1cc4-merged.mount: Deactivated successfully.
Jan 26 13:07:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:07:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:07:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:07:59.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:07:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 167 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Jan 26 13:08:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:08:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:00.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:08:00 np0005596060 podman[256431]: 2026-01-26 18:08:00.517674424 +0000 UTC m=+2.230119618 container remove f824278b93b9f1329fd4cbd30a76adf9d90ac9ed2ba7820bd2312ba1c1891b1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 13:08:00 np0005596060 systemd[1]: libpod-conmon-f824278b93b9f1329fd4cbd30a76adf9d90ac9ed2ba7820bd2312ba1c1891b1d.scope: Deactivated successfully.
Jan 26 13:08:00 np0005596060 nova_compute[247421]: 2026-01-26 18:08:00.557 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:00 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:00Z|00047|binding|INFO|Claiming lport 06538465-e309-4216-af1a-244565d3805b for this chassis.
Jan 26 13:08:00 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:00Z|00048|binding|INFO|06538465-e309-4216-af1a-244565d3805b: Claiming fa:16:3e:35:48:ae 10.100.0.14
Jan 26 13:08:00 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:00Z|00049|binding|INFO|Claiming lport 8efebc34-f8eb-42e5-af94-78e84c0dcbba for this chassis.
Jan 26 13:08:00 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:00Z|00050|binding|INFO|8efebc34-f8eb-42e5-af94-78e84c0dcbba: Claiming fa:16:3e:c6:69:fa 19.80.0.72
Jan 26 13:08:00 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:00Z|00051|binding|INFO|Setting lport 06538465-e309-4216-af1a-244565d3805b up in Southbound
Jan 26 13:08:00 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:00Z|00052|binding|INFO|Setting lport 8efebc34-f8eb-42e5-af94-78e84c0dcbba up in Southbound
Jan 26 13:08:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:00.915 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:48:ae 10.100.0.14'], port_security=['fa:16:3e:35:48:ae 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1321931442', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e40120ae-eb4e-4f0b-9d8f-f0210de78c4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0516cc55-93b8-4bf2-b595-d07702fa255b', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1321931442', 'neutron:project_id': 'b1f2cad350784d7eae39fc23fb032500', 'neutron:revision_number': '11', 'neutron:security_group_ids': '4e1bd851-4cc2-4677-be2e-39f74460bffd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db9bad5b-1a88-4481-85c1-c131f59dea19, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=06538465-e309-4216-af1a-244565d3805b) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:08:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:00.917 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:69:fa 19.80.0.72'], port_security=['fa:16:3e:c6:69:fa 19.80.0.72'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['06538465-e309-4216-af1a-244565d3805b'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-2075617635', 'neutron:cidrs': '19.80.0.72/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebb9e0b4-8385-462a-84cc-87c6f72c0c65', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-2075617635', 'neutron:project_id': 'b1f2cad350784d7eae39fc23fb032500', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4e1bd851-4cc2-4677-be2e-39f74460bffd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=75dd0954-cbf3-4a3e-a6ef-19fcd101cc5d, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8efebc34-f8eb-42e5-af94-78e84c0dcbba) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:08:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:00.918 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 06538465-e309-4216-af1a-244565d3805b in datapath 0516cc55-93b8-4bf2-b595-d07702fa255b bound to our chassis#033[00m
Jan 26 13:08:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:00.920 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0516cc55-93b8-4bf2-b595-d07702fa255b#033[00m
Jan 26 13:08:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:00.935 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7676cb88-5a6f-474f-88f4-32c44b06784c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:00.936 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0516cc55-91 in ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:08:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:00.938 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0516cc55-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:08:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:00.939 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b107ae4d-fb28-41d6-b61c-56693e4c48c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:00.939 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[d7392c24-9da7-46d0-b6ad-61711383a430]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:00.954 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[4414e7ea-80cc-42fe-ac33-fc9f0451347a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:00.972 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a744cce5-9dfd-4696-82dc-2b94097ae5c0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 nova_compute[247421]: 2026-01-26 18:08:01.050 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.049 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[0242ce05-647a-49c3-b721-2fc04d83bd11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.055 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[6f259cc9-2004-47a1-8d08-4fe29dd20f1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 NetworkManager[48900]: <info>  [1769450881.0611] manager: (tap0516cc55-90): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.084 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5d73a3-2c03-4f8d-b400-df8a6cd4f164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.086 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[896bc7e0-9d38-43fa-9d4a-26c1733c0220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 systemd-udevd[256616]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:08:01 np0005596060 NetworkManager[48900]: <info>  [1769450881.1137] device (tap0516cc55-90): carrier: link connected
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.118 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[14449a7d-a6bd-4b17-b90f-0b0b7074faab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.139 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[d0520fa1-c762-4b06-8341-4761b8aeed8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0516cc55-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:40:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464569, 'reachable_time': 27278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256620, 'error': None, 'target': 'ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.155 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[68bd31d9-330b-4485-8c91-1b689dcea7aa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:40ef'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464569, 'tstamp': 464569}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256638, 'error': None, 'target': 'ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.169 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[493f0295-43b6-4e52-b99e-8fdbafa7d407]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0516cc55-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:40:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 90, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464569, 'reachable_time': 27278, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256648, 'error': None, 'target': 'ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.208 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[244d0f17-1da5-4d5c-8ea1-910d3a7e004c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 podman[256641]: 2026-01-26 18:08:01.210197065 +0000 UTC m=+0.050845236 container create 5d43f9d33f39a9fae30c48bea11a261e62d485ee3789710ee63e083fe5260699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mcnulty, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 13:08:01 np0005596060 systemd[1]: Started libpod-conmon-5d43f9d33f39a9fae30c48bea11a261e62d485ee3789710ee63e083fe5260699.scope.
Jan 26 13:08:01 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:08:01 np0005596060 podman[256641]: 2026-01-26 18:08:01.190290384 +0000 UTC m=+0.030938575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.298 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[6e733a39-976d-4b32-968f-d69da7cbc1b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 podman[256641]: 2026-01-26 18:08:01.29985962 +0000 UTC m=+0.140507811 container init 5d43f9d33f39a9fae30c48bea11a261e62d485ee3789710ee63e083fe5260699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mcnulty, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.300 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0516cc55-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.300 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.300 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0516cc55-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:08:01 np0005596060 kernel: tap0516cc55-90: entered promiscuous mode
Jan 26 13:08:01 np0005596060 NetworkManager[48900]: <info>  [1769450881.3029] manager: (tap0516cc55-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Jan 26 13:08:01 np0005596060 nova_compute[247421]: 2026-01-26 18:08:01.302 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.305 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0516cc55-90, col_values=(('external_ids', {'iface-id': '46cfbba6-430a-495c-9d6a-60cf58c877d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:08:01 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:01Z|00053|binding|INFO|Releasing lport 46cfbba6-430a-495c-9d6a-60cf58c877d3 from this chassis (sb_readonly=0)
Jan 26 13:08:01 np0005596060 nova_compute[247421]: 2026-01-26 18:08:01.307 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:01 np0005596060 podman[256641]: 2026-01-26 18:08:01.30958853 +0000 UTC m=+0.150236711 container start 5d43f9d33f39a9fae30c48bea11a261e62d485ee3789710ee63e083fe5260699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:08:01 np0005596060 podman[256641]: 2026-01-26 18:08:01.312941403 +0000 UTC m=+0.153589574 container attach 5d43f9d33f39a9fae30c48bea11a261e62d485ee3789710ee63e083fe5260699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mcnulty, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:08:01 np0005596060 jolly_mcnulty[256661]: 167 167
Jan 26 13:08:01 np0005596060 nova_compute[247421]: 2026-01-26 18:08:01.316 247428 INFO nova.compute.manager [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Post operation of migration started#033[00m
Jan 26 13:08:01 np0005596060 systemd[1]: libpod-5d43f9d33f39a9fae30c48bea11a261e62d485ee3789710ee63e083fe5260699.scope: Deactivated successfully.
Jan 26 13:08:01 np0005596060 conmon[256661]: conmon 5d43f9d33f39a9fae30c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5d43f9d33f39a9fae30c48bea11a261e62d485ee3789710ee63e083fe5260699.scope/container/memory.events
Jan 26 13:08:01 np0005596060 podman[256641]: 2026-01-26 18:08:01.323434452 +0000 UTC m=+0.164082613 container died 5d43f9d33f39a9fae30c48bea11a261e62d485ee3789710ee63e083fe5260699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mcnulty, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 13:08:01 np0005596060 nova_compute[247421]: 2026-01-26 18:08:01.324 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.325 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0516cc55-93b8-4bf2-b595-d07702fa255b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0516cc55-93b8-4bf2-b595-d07702fa255b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.326 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[d67cb3ff-8783-490a-b4f6-696262118f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.327 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-0516cc55-93b8-4bf2-b595-d07702fa255b
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/0516cc55-93b8-4bf2-b595-d07702fa255b.pid.haproxy
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 0516cc55-93b8-4bf2-b595-d07702fa255b
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.327 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b', 'env', 'PROCESS_TAG=haproxy-0516cc55-93b8-4bf2-b595-d07702fa255b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0516cc55-93b8-4bf2-b595-d07702fa255b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:08:01 np0005596060 systemd[1]: var-lib-containers-storage-overlay-390d5ca0477158cd0aa731eaf7dc88deb921b19ed5dd6049b2bde1b8b20625d8-merged.mount: Deactivated successfully.
Jan 26 13:08:01 np0005596060 podman[256641]: 2026-01-26 18:08:01.359936793 +0000 UTC m=+0.200584964 container remove 5d43f9d33f39a9fae30c48bea11a261e62d485ee3789710ee63e083fe5260699 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:08:01 np0005596060 systemd[1]: libpod-conmon-5d43f9d33f39a9fae30c48bea11a261e62d485ee3789710ee63e083fe5260699.scope: Deactivated successfully.
Jan 26 13:08:01 np0005596060 podman[256691]: 2026-01-26 18:08:01.543481895 +0000 UTC m=+0.044643573 container create 1909c6b65438ac5abcd8f2b437a493ba273aeddc86237e326b78038db3e4768d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noyce, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:08:01 np0005596060 systemd[1]: Started libpod-conmon-1909c6b65438ac5abcd8f2b437a493ba273aeddc86237e326b78038db3e4768d.scope.
Jan 26 13:08:01 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:08:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c82ec6081b739eb8fa1e95750c47ffd0942d8efaa894d24c831f67e082c8b0d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:08:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c82ec6081b739eb8fa1e95750c47ffd0942d8efaa894d24c831f67e082c8b0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:08:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c82ec6081b739eb8fa1e95750c47ffd0942d8efaa894d24c831f67e082c8b0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:08:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c82ec6081b739eb8fa1e95750c47ffd0942d8efaa894d24c831f67e082c8b0d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:08:01 np0005596060 podman[256691]: 2026-01-26 18:08:01.524028955 +0000 UTC m=+0.025190663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:08:01 np0005596060 podman[256691]: 2026-01-26 18:08:01.620255161 +0000 UTC m=+0.121416859 container init 1909c6b65438ac5abcd8f2b437a493ba273aeddc86237e326b78038db3e4768d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:08:01 np0005596060 podman[256691]: 2026-01-26 18:08:01.626600068 +0000 UTC m=+0.127761776 container start 1909c6b65438ac5abcd8f2b437a493ba273aeddc86237e326b78038db3e4768d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noyce, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:08:01 np0005596060 podman[256691]: 2026-01-26 18:08:01.644483899 +0000 UTC m=+0.145645577 container attach 1909c6b65438ac5abcd8f2b437a493ba273aeddc86237e326b78038db3e4768d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noyce, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 13:08:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:01.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:01 np0005596060 podman[256734]: 2026-01-26 18:08:01.742721965 +0000 UTC m=+0.074553552 container create 10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 13:08:01 np0005596060 systemd[1]: Started libpod-conmon-10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4.scope.
Jan 26 13:08:01 np0005596060 podman[256734]: 2026-01-26 18:08:01.69878946 +0000 UTC m=+0.030621047 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:08:01 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:08:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1493ff747d35994ace7394a1fe08102e85ca6f8fe76708c18a10fd116c695f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:08:01 np0005596060 podman[256734]: 2026-01-26 18:08:01.847682067 +0000 UTC m=+0.179513664 container init 10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:08:01 np0005596060 podman[256734]: 2026-01-26 18:08:01.854078995 +0000 UTC m=+0.185910572 container start 10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:08:01 np0005596060 neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b[256749]: [NOTICE]   (256753) : New worker (256755) forked
Jan 26 13:08:01 np0005596060 neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b[256749]: [NOTICE]   (256753) : Loading success.
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.918 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 8efebc34-f8eb-42e5-af94-78e84c0dcbba in datapath ebb9e0b4-8385-462a-84cc-87c6f72c0c65 unbound from our chassis#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.921 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebb9e0b4-8385-462a-84cc-87c6f72c0c65#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.936 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[13abf505-1f96-4e4e-ac3e-3074176e2862]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.937 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapebb9e0b4-81 in ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.938 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapebb9e0b4-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.938 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[34dbd4e3-38c2-4ac2-ad8c-645e22be2f24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.941 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[2787bcfb-ebf8-4f0b-9b2d-0988513fc0aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 167 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.963 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[7884b410-215f-40c3-b55e-4c3a4b10866a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:01.979 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e9512165-c931-4772-ae6d-5d90d3fd6548]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.015 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[c61d9152-582a-4d6b-8b7c-c95702fdfc24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:02 np0005596060 NetworkManager[48900]: <info>  [1769450882.0230] manager: (tapebb9e0b4-80): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.024 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[ac22a9ff-9769-4ea0-9236-616e4a794910]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:02 np0005596060 systemd-udevd[256628]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.058 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[589d9583-7980-4dde-9e3d-e4238f9cf06e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.060 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[e2201ce1-e0cb-40f9-a2c7-e5e35b5a0635]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:02 np0005596060 NetworkManager[48900]: <info>  [1769450882.0879] device (tapebb9e0b4-80): carrier: link connected
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.096 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[0621227b-2b78-4222-941a-181ee13b316c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.116 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[34dc162a-ecb6-4189-9248-de9f8ed114a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebb9e0b4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:af:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464666, 'reachable_time': 21369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256774, 'error': None, 'target': 'ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:02.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.138 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[4838c55b-7aaf-4299-8c9c-2bd281f08363]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe22:af9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464666, 'tstamp': 464666}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256775, 'error': None, 'target': 'ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.154 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[516231d0-44bb-481c-800c-a26692228e0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebb9e0b4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:22:af:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464666, 'reachable_time': 21369, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256776, 'error': None, 'target': 'ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.194 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[54324660-a6c8-4ff3-b5aa-de89dac85ed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.281 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[4543f1c2-a11a-4966-80cb-db2c81769267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.282 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebb9e0b4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.283 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.283 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebb9e0b4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:08:02 np0005596060 nova_compute[247421]: 2026-01-26 18:08:02.288 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:02 np0005596060 NetworkManager[48900]: <info>  [1769450882.2892] manager: (tapebb9e0b4-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 26 13:08:02 np0005596060 kernel: tapebb9e0b4-80: entered promiscuous mode
Jan 26 13:08:02 np0005596060 nova_compute[247421]: 2026-01-26 18:08:02.293 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.294 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebb9e0b4-80, col_values=(('external_ids', {'iface-id': 'ec5ab65e-333c-4443-bd37-b74fa484479e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:08:02 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:02Z|00054|binding|INFO|Releasing lport ec5ab65e-333c-4443-bd37-b74fa484479e from this chassis (sb_readonly=0)
Jan 26 13:08:02 np0005596060 nova_compute[247421]: 2026-01-26 18:08:02.297 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:02 np0005596060 nova_compute[247421]: 2026-01-26 18:08:02.298 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.299 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ebb9e0b4-8385-462a-84cc-87c6f72c0c65.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ebb9e0b4-8385-462a-84cc-87c6f72c0c65.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:08:02 np0005596060 nova_compute[247421]: 2026-01-26 18:08:02.310 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.309 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[6674c0d8-76cc-4102-8532-c389f69d6d0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.312 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-ebb9e0b4-8385-462a-84cc-87c6f72c0c65
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/ebb9e0b4-8385-462a-84cc-87c6f72c0c65.pid.haproxy
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID ebb9e0b4-8385-462a-84cc-87c6f72c0c65
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:08:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:02.312 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65', 'env', 'PROCESS_TAG=haproxy-ebb9e0b4-8385-462a-84cc-87c6f72c0c65', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ebb9e0b4-8385-462a-84cc-87c6f72c0c65.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]: {
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:    "1": [
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:        {
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            "devices": [
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "/dev/loop3"
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            ],
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            "lv_name": "ceph_lv0",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            "lv_size": "7511998464",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            "name": "ceph_lv0",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            "tags": {
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "ceph.cluster_name": "ceph",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "ceph.crush_device_class": "",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "ceph.encrypted": "0",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "ceph.osd_id": "1",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "ceph.type": "block",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:                "ceph.vdo": "0"
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            },
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            "type": "block",
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:            "vg_name": "ceph_vg0"
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:        }
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]:    ]
Jan 26 13:08:02 np0005596060 nervous_noyce[256712]: }
Jan 26 13:08:02 np0005596060 systemd[1]: libpod-1909c6b65438ac5abcd8f2b437a493ba273aeddc86237e326b78038db3e4768d.scope: Deactivated successfully.
Jan 26 13:08:02 np0005596060 podman[256691]: 2026-01-26 18:08:02.470914245 +0000 UTC m=+0.972075943 container died 1909c6b65438ac5abcd8f2b437a493ba273aeddc86237e326b78038db3e4768d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noyce, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 13:08:02 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2c82ec6081b739eb8fa1e95750c47ffd0942d8efaa894d24c831f67e082c8b0d-merged.mount: Deactivated successfully.
Jan 26 13:08:02 np0005596060 podman[256691]: 2026-01-26 18:08:02.541605941 +0000 UTC m=+1.042767619 container remove 1909c6b65438ac5abcd8f2b437a493ba273aeddc86237e326b78038db3e4768d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 26 13:08:02 np0005596060 systemd[1]: libpod-conmon-1909c6b65438ac5abcd8f2b437a493ba273aeddc86237e326b78038db3e4768d.scope: Deactivated successfully.
Jan 26 13:08:02 np0005596060 podman[256863]: 2026-01-26 18:08:02.769270532 +0000 UTC m=+0.057406308 container create 7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 13:08:02 np0005596060 systemd[1]: Started libpod-conmon-7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6.scope.
Jan 26 13:08:02 np0005596060 podman[256863]: 2026-01-26 18:08:02.744577922 +0000 UTC m=+0.032713718 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:08:02 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:08:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af08287eaebd49609f435f1cea3753e275846d56a19fdf30393e40fad7e30fea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:08:02 np0005596060 podman[256863]: 2026-01-26 18:08:02.863985241 +0000 UTC m=+0.152121017 container init 7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:08:02 np0005596060 podman[256863]: 2026-01-26 18:08:02.8764913 +0000 UTC m=+0.164627076 container start 7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 13:08:02 np0005596060 neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65[256909]: [NOTICE]   (256919) : New worker (256937) forked
Jan 26 13:08:02 np0005596060 neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65[256909]: [NOTICE]   (256919) : Loading success.
Jan 26 13:08:03 np0005596060 podman[256993]: 2026-01-26 18:08:03.307889092 +0000 UTC m=+0.043610188 container create f657accc3421ae0cfc93764b773aca444e8a3109430cb77874db156a130865f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:08:03 np0005596060 systemd[1]: Started libpod-conmon-f657accc3421ae0cfc93764b773aca444e8a3109430cb77874db156a130865f3.scope.
Jan 26 13:08:03 np0005596060 podman[256993]: 2026-01-26 18:08:03.289357714 +0000 UTC m=+0.025078830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:08:03 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:08:03 np0005596060 podman[256993]: 2026-01-26 18:08:03.419456957 +0000 UTC m=+0.155178103 container init f657accc3421ae0cfc93764b773aca444e8a3109430cb77874db156a130865f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:08:03 np0005596060 podman[256993]: 2026-01-26 18:08:03.428269175 +0000 UTC m=+0.163990291 container start f657accc3421ae0cfc93764b773aca444e8a3109430cb77874db156a130865f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 13:08:03 np0005596060 podman[256993]: 2026-01-26 18:08:03.432046048 +0000 UTC m=+0.167767194 container attach f657accc3421ae0cfc93764b773aca444e8a3109430cb77874db156a130865f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:08:03 np0005596060 gallant_ride[257009]: 167 167
Jan 26 13:08:03 np0005596060 systemd[1]: libpod-f657accc3421ae0cfc93764b773aca444e8a3109430cb77874db156a130865f3.scope: Deactivated successfully.
Jan 26 13:08:03 np0005596060 conmon[257009]: conmon f657accc3421ae0cfc93 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f657accc3421ae0cfc93764b773aca444e8a3109430cb77874db156a130865f3.scope/container/memory.events
Jan 26 13:08:03 np0005596060 podman[256993]: 2026-01-26 18:08:03.437944393 +0000 UTC m=+0.173665489 container died f657accc3421ae0cfc93764b773aca444e8a3109430cb77874db156a130865f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:08:03 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4ef2ea4dee109968edce8ea6efc230c681384301ee37e3c06151479a8958bfd0-merged.mount: Deactivated successfully.
Jan 26 13:08:03 np0005596060 podman[256993]: 2026-01-26 18:08:03.482395111 +0000 UTC m=+0.218116207 container remove f657accc3421ae0cfc93764b773aca444e8a3109430cb77874db156a130865f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 26 13:08:03 np0005596060 systemd[1]: libpod-conmon-f657accc3421ae0cfc93764b773aca444e8a3109430cb77874db156a130865f3.scope: Deactivated successfully.
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031574248557602828 of space, bias 1.0, pg target 0.9472274567280848 quantized to 32 (current 32)
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:08:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:08:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:03.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.679819) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450883679920, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 354, "num_deletes": 256, "total_data_size": 221893, "memory_usage": 230552, "flush_reason": "Manual Compaction"}
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450883683583, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 220890, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21913, "largest_seqno": 22266, "table_properties": {"data_size": 218598, "index_size": 392, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5418, "raw_average_key_size": 17, "raw_value_size": 214005, "raw_average_value_size": 690, "num_data_blocks": 18, "num_entries": 310, "num_filter_entries": 310, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769450875, "oldest_key_time": 1769450875, "file_creation_time": 1769450883, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 3775 microseconds, and 1924 cpu microseconds.
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.683622) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 220890 bytes OK
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.683643) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.684856) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.684876) EVENT_LOG_v1 {"time_micros": 1769450883684869, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.684902) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 219447, prev total WAL file size 219447, number of live WAL files 2.
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.685415) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(215KB)], [50(7674KB)]
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450883685468, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 8079301, "oldest_snapshot_seqno": -1}
Jan 26 13:08:03 np0005596060 podman[257033]: 2026-01-26 18:08:03.703799628 +0000 UTC m=+0.056076686 container create ccdaced999fb3c97facffbdd49ed5a1b5a57d18b247628a7735060b67bf999b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4545 keys, 7944563 bytes, temperature: kUnknown
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450883744502, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 7944563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7913861, "index_size": 18240, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 116118, "raw_average_key_size": 25, "raw_value_size": 7831052, "raw_average_value_size": 1723, "num_data_blocks": 742, "num_entries": 4545, "num_filter_entries": 4545, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769450883, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.744876) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 7944563 bytes
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.746393) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.6 rd, 134.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 7.5 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(72.5) write-amplify(36.0) OK, records in: 5068, records dropped: 523 output_compression: NoCompression
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.746423) EVENT_LOG_v1 {"time_micros": 1769450883746410, "job": 26, "event": "compaction_finished", "compaction_time_micros": 59166, "compaction_time_cpu_micros": 26066, "output_level": 6, "num_output_files": 1, "total_output_size": 7944563, "num_input_records": 5068, "num_output_records": 4545, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450883746678, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769450883749105, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.685324) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.749230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.749240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.749242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.749244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:08:03 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:08:03.749245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:08:03 np0005596060 podman[257033]: 2026-01-26 18:08:03.68241064 +0000 UTC m=+0.034687718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:08:03 np0005596060 systemd[1]: Started libpod-conmon-ccdaced999fb3c97facffbdd49ed5a1b5a57d18b247628a7735060b67bf999b5.scope.
Jan 26 13:08:03 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:08:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f751c7456fb52609e68bee6b64732ff0a654b7d794a4cb8695c37bd4766b4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:08:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f751c7456fb52609e68bee6b64732ff0a654b7d794a4cb8695c37bd4766b4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:08:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f751c7456fb52609e68bee6b64732ff0a654b7d794a4cb8695c37bd4766b4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:08:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72f751c7456fb52609e68bee6b64732ff0a654b7d794a4cb8695c37bd4766b4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:08:03 np0005596060 podman[257033]: 2026-01-26 18:08:03.84400825 +0000 UTC m=+0.196285338 container init ccdaced999fb3c97facffbdd49ed5a1b5a57d18b247628a7735060b67bf999b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hugle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 13:08:03 np0005596060 podman[257033]: 2026-01-26 18:08:03.852649394 +0000 UTC m=+0.204926452 container start ccdaced999fb3c97facffbdd49ed5a1b5a57d18b247628a7735060b67bf999b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hugle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 13:08:03 np0005596060 podman[257033]: 2026-01-26 18:08:03.856401896 +0000 UTC m=+0.208678974 container attach ccdaced999fb3c97facffbdd49ed5a1b5a57d18b247628a7735060b67bf999b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hugle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:08:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 167 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 135 op/s
Jan 26 13:08:04 np0005596060 nova_compute[247421]: 2026-01-26 18:08:04.056 247428 DEBUG oslo_concurrency.lockutils [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquiring lock "refresh_cache-e40120ae-eb4e-4f0b-9d8f-f0210de78c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:08:04 np0005596060 nova_compute[247421]: 2026-01-26 18:08:04.057 247428 DEBUG oslo_concurrency.lockutils [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquired lock "refresh_cache-e40120ae-eb4e-4f0b-9d8f-f0210de78c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:08:04 np0005596060 nova_compute[247421]: 2026-01-26 18:08:04.057 247428 DEBUG nova.network.neutron [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:08:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:04.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:04 np0005596060 nova_compute[247421]: 2026-01-26 18:08:04.702 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Updating instance_info_cache with network_info: [{"id": "59c8d05c-d702-4701-8157-aa4f2da6736e", "address": "fa:16:3e:d7:77:d1", "network": {"id": "1ef6f1a6-165e-4b17-8f40-6c5a006288c4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1950110523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6a7c4658204ff7b58cbc0fec17a157", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59c8d05c-d7", "ovs_interfaceid": "59c8d05c-d702-4701-8157-aa4f2da6736e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:08:04 np0005596060 thirsty_hugle[257050]: {
Jan 26 13:08:04 np0005596060 thirsty_hugle[257050]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:08:04 np0005596060 thirsty_hugle[257050]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:08:04 np0005596060 thirsty_hugle[257050]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:08:04 np0005596060 thirsty_hugle[257050]:        "osd_id": 1,
Jan 26 13:08:04 np0005596060 thirsty_hugle[257050]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:08:04 np0005596060 thirsty_hugle[257050]:        "type": "bluestore"
Jan 26 13:08:04 np0005596060 thirsty_hugle[257050]:    }
Jan 26 13:08:04 np0005596060 thirsty_hugle[257050]: }
Jan 26 13:08:04 np0005596060 nova_compute[247421]: 2026-01-26 18:08:04.755 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Releasing lock "refresh_cache-8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:08:04 np0005596060 nova_compute[247421]: 2026-01-26 18:08:04.756 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 26 13:08:04 np0005596060 nova_compute[247421]: 2026-01-26 18:08:04.756 247428 DEBUG oslo_concurrency.lockutils [req-9be93288-3b0a-4909-9641-ae7b87708b75 req-1448dec0-0d42-4f0b-8ce3-ae8b475648f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:08:04 np0005596060 nova_compute[247421]: 2026-01-26 18:08:04.756 247428 DEBUG nova.network.neutron [req-9be93288-3b0a-4909-9641-ae7b87708b75 req-1448dec0-0d42-4f0b-8ce3-ae8b475648f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Refreshing network info cache for port 59c8d05c-d702-4701-8157-aa4f2da6736e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:08:04 np0005596060 nova_compute[247421]: 2026-01-26 18:08:04.758 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:04 np0005596060 nova_compute[247421]: 2026-01-26 18:08:04.758 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:04 np0005596060 nova_compute[247421]: 2026-01-26 18:08:04.759 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:04 np0005596060 nova_compute[247421]: 2026-01-26 18:08:04.759 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:04 np0005596060 systemd[1]: libpod-ccdaced999fb3c97facffbdd49ed5a1b5a57d18b247628a7735060b67bf999b5.scope: Deactivated successfully.
Jan 26 13:08:04 np0005596060 podman[257033]: 2026-01-26 18:08:04.762813158 +0000 UTC m=+1.115090256 container died ccdaced999fb3c97facffbdd49ed5a1b5a57d18b247628a7735060b67bf999b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hugle, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:08:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-72f751c7456fb52609e68bee6b64732ff0a654b7d794a4cb8695c37bd4766b4b-merged.mount: Deactivated successfully.
Jan 26 13:08:04 np0005596060 podman[257033]: 2026-01-26 18:08:04.826764697 +0000 UTC m=+1.179041775 container remove ccdaced999fb3c97facffbdd49ed5a1b5a57d18b247628a7735060b67bf999b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_hugle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:08:04 np0005596060 systemd[1]: libpod-conmon-ccdaced999fb3c97facffbdd49ed5a1b5a57d18b247628a7735060b67bf999b5.scope: Deactivated successfully.
Jan 26 13:08:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:08:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:08:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:08:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:08:04 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8e96d0ce-b62c-415c-a459-b83bd8879130 does not exist
Jan 26 13:08:04 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 18f7c4b3-8ea0-46cd-a116-1d502fc230d4 does not exist
Jan 26 13:08:04 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8c6e9fd8-f816-4a65-8b20-4986254fb659 does not exist
Jan 26 13:08:05 np0005596060 podman[257109]: 2026-01-26 18:08:05.351967566 +0000 UTC m=+0.087018870 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:08:05 np0005596060 podman[257110]: 2026-01-26 18:08:05.390954249 +0000 UTC m=+0.126001623 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 13:08:05 np0005596060 nova_compute[247421]: 2026-01-26 18:08:05.559 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:08:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:05.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:08:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:08:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:08:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 167 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 88 KiB/s wr, 89 op/s
Jan 26 13:08:06 np0005596060 nova_compute[247421]: 2026-01-26 18:08:06.052 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:08:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:06.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:08:07 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:07Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d7:77:d1 10.100.0.13
Jan 26 13:08:07 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:07Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d7:77:d1 10.100.0.13
Jan 26 13:08:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:07.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 197 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 152 op/s
Jan 26 13:08:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:08.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:08 np0005596060 nova_compute[247421]: 2026-01-26 18:08:08.270 247428 DEBUG nova.network.neutron [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Updating instance_info_cache with network_info: [{"id": "06538465-e309-4216-af1a-244565d3805b", "address": "fa:16:3e:35:48:ae", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06538465-e3", "ovs_interfaceid": "06538465-e309-4216-af1a-244565d3805b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:08:08 np0005596060 nova_compute[247421]: 2026-01-26 18:08:08.297 247428 DEBUG oslo_concurrency.lockutils [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Releasing lock "refresh_cache-e40120ae-eb4e-4f0b-9d8f-f0210de78c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:08:08 np0005596060 nova_compute[247421]: 2026-01-26 18:08:08.327 247428 DEBUG oslo_concurrency.lockutils [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:08:08 np0005596060 nova_compute[247421]: 2026-01-26 18:08:08.328 247428 DEBUG oslo_concurrency.lockutils [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:08:08 np0005596060 nova_compute[247421]: 2026-01-26 18:08:08.328 247428 DEBUG oslo_concurrency.lockutils [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:08:08 np0005596060 nova_compute[247421]: 2026-01-26 18:08:08.338 247428 INFO nova.virt.libvirt.driver [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Jan 26 13:08:08 np0005596060 virtqemud[246749]: Domain id=4 name='instance-00000005' uuid=e40120ae-eb4e-4f0b-9d8f-f0210de78c4f is tainted: custom-monitor
Jan 26 13:08:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:08:09 np0005596060 nova_compute[247421]: 2026-01-26 18:08:09.271 247428 DEBUG nova.network.neutron [req-9be93288-3b0a-4909-9641-ae7b87708b75 req-1448dec0-0d42-4f0b-8ce3-ae8b475648f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Updated VIF entry in instance network info cache for port 59c8d05c-d702-4701-8157-aa4f2da6736e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:08:09 np0005596060 nova_compute[247421]: 2026-01-26 18:08:09.272 247428 DEBUG nova.network.neutron [req-9be93288-3b0a-4909-9641-ae7b87708b75 req-1448dec0-0d42-4f0b-8ce3-ae8b475648f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Updating instance_info_cache with network_info: [{"id": "59c8d05c-d702-4701-8157-aa4f2da6736e", "address": "fa:16:3e:d7:77:d1", "network": {"id": "1ef6f1a6-165e-4b17-8f40-6c5a006288c4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1950110523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6a7c4658204ff7b58cbc0fec17a157", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59c8d05c-d7", "ovs_interfaceid": "59c8d05c-d702-4701-8157-aa4f2da6736e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:08:09 np0005596060 nova_compute[247421]: 2026-01-26 18:08:09.290 247428 DEBUG oslo_concurrency.lockutils [req-9be93288-3b0a-4909-9641-ae7b87708b75 req-1448dec0-0d42-4f0b-8ce3-ae8b475648f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:08:09 np0005596060 nova_compute[247421]: 2026-01-26 18:08:09.350 247428 INFO nova.virt.libvirt.driver [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Jan 26 13:08:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:09.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 197 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 317 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 13:08:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:10.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:10 np0005596060 nova_compute[247421]: 2026-01-26 18:08:10.358 247428 INFO nova.virt.libvirt.driver [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Jan 26 13:08:10 np0005596060 nova_compute[247421]: 2026-01-26 18:08:10.366 247428 DEBUG nova.compute.manager [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:08:10 np0005596060 nova_compute[247421]: 2026-01-26 18:08:10.406 247428 DEBUG nova.objects.instance [None req-335d76ca-73c4-4e0c-8853-855fc0bca693 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 26 13:08:10 np0005596060 nova_compute[247421]: 2026-01-26 18:08:10.561 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:11 np0005596060 nova_compute[247421]: 2026-01-26 18:08:11.056 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:11.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 26 13:08:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:12.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:08:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:13.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 330 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Jan 26 13:08:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:08:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:08:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:08:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:08:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:08:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:08:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:14.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:14.740 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:08:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:14.741 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:08:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:14.742 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:08:15 np0005596060 nova_compute[247421]: 2026-01-26 18:08:15.564 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:15.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 330 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Jan 26 13:08:16 np0005596060 nova_compute[247421]: 2026-01-26 18:08:16.058 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:16.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:16 np0005596060 nova_compute[247421]: 2026-01-26 18:08:16.757 247428 DEBUG oslo_concurrency.lockutils [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Acquiring lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:08:16 np0005596060 nova_compute[247421]: 2026-01-26 18:08:16.758 247428 DEBUG oslo_concurrency.lockutils [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:08:16 np0005596060 nova_compute[247421]: 2026-01-26 18:08:16.758 247428 DEBUG oslo_concurrency.lockutils [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Acquiring lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:08:16 np0005596060 nova_compute[247421]: 2026-01-26 18:08:16.759 247428 DEBUG oslo_concurrency.lockutils [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:08:16 np0005596060 nova_compute[247421]: 2026-01-26 18:08:16.760 247428 DEBUG oslo_concurrency.lockutils [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:08:16 np0005596060 nova_compute[247421]: 2026-01-26 18:08:16.761 247428 INFO nova.compute.manager [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Terminating instance#033[00m
Jan 26 13:08:16 np0005596060 nova_compute[247421]: 2026-01-26 18:08:16.763 247428 DEBUG nova.compute.manager [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:08:17 np0005596060 kernel: tap59c8d05c-d7 (unregistering): left promiscuous mode
Jan 26 13:08:17 np0005596060 NetworkManager[48900]: <info>  [1769450897.0550] device (tap59c8d05c-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:08:17 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:17Z|00055|binding|INFO|Releasing lport 59c8d05c-d702-4701-8157-aa4f2da6736e from this chassis (sb_readonly=0)
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.066 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:17 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:17Z|00056|binding|INFO|Setting lport 59c8d05c-d702-4701-8157-aa4f2da6736e down in Southbound
Jan 26 13:08:17 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:17Z|00057|binding|INFO|Removing iface tap59c8d05c-d7 ovn-installed in OVS
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.071 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.078 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:77:d1 10.100.0.13'], port_security=['fa:16:3e:d7:77:d1 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8c19a6a9-b54e-4bc8-a58b-a6186c2d048b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1ef6f1a6-165e-4b17-8f40-6c5a006288c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a6a7c4658204ff7b58cbc0fec17a157', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc23a8c7-7183-4d87-9ed1-326712e58ede', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fecc7817-277f-455e-9d40-c1c58433f73c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=59c8d05c-d702-4701-8157-aa4f2da6736e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.079 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 59c8d05c-d702-4701-8157-aa4f2da6736e in datapath 1ef6f1a6-165e-4b17-8f40-6c5a006288c4 unbound from our chassis#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.080 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1ef6f1a6-165e-4b17-8f40-6c5a006288c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.082 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[baa77e51-dafa-41dd-881f-58d6361b8264]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.083 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4 namespace which is not needed anymore#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.090 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:17 np0005596060 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 26 13:08:17 np0005596060 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 14.113s CPU time.
Jan 26 13:08:17 np0005596060 systemd-machined[213879]: Machine qemu-3-instance-00000006 terminated.
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.184 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.190 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.202 247428 INFO nova.virt.libvirt.driver [-] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Instance destroyed successfully.#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.203 247428 DEBUG nova.objects.instance [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lazy-loading 'resources' on Instance uuid 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.228 247428 DEBUG nova.virt.libvirt.vif [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:07:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-379951958',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-379951958',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(16),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-379951958',id=6,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=16,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLrpjwE+Ecy4AAiCXUTJSmK61q8NybCDeA5k2+vIQ8wCiO+ptwfDNsYzsnUo27lqsZd2ACx5xgmi4WnnFmM7jeMejr1yR3v6fQC/AE3qGsGMdB3DcNq1saY+RRjofMNKqw==',key_name='tempest-keypair-231640517',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:07:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4a6a7c4658204ff7b58cbc0fec17a157',ramdisk_id='',reservation_id='r-2qjhyut6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-255904350',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-255904350-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:07:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='4824e9871dbc4b4c84dffadc67ceb442',uuid=8c19a6a9-b54e-4bc8-a58b-a6186c2d048b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "59c8d05c-d702-4701-8157-aa4f2da6736e", "address": "fa:16:3e:d7:77:d1", "network": {"id": "1ef6f1a6-165e-4b17-8f40-6c5a006288c4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1950110523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6a7c4658204ff7b58cbc0fec17a157", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59c8d05c-d7", "ovs_interfaceid": "59c8d05c-d702-4701-8157-aa4f2da6736e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.228 247428 DEBUG nova.network.os_vif_util [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Converting VIF {"id": "59c8d05c-d702-4701-8157-aa4f2da6736e", "address": "fa:16:3e:d7:77:d1", "network": {"id": "1ef6f1a6-165e-4b17-8f40-6c5a006288c4", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1950110523-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a6a7c4658204ff7b58cbc0fec17a157", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap59c8d05c-d7", "ovs_interfaceid": "59c8d05c-d702-4701-8157-aa4f2da6736e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.229 247428 DEBUG nova.network.os_vif_util [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d7:77:d1,bridge_name='br-int',has_traffic_filtering=True,id=59c8d05c-d702-4701-8157-aa4f2da6736e,network=Network(1ef6f1a6-165e-4b17-8f40-6c5a006288c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59c8d05c-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.230 247428 DEBUG os_vif [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:77:d1,bridge_name='br-int',has_traffic_filtering=True,id=59c8d05c-d702-4701-8157-aa4f2da6736e,network=Network(1ef6f1a6-165e-4b17-8f40-6c5a006288c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59c8d05c-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.232 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.232 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap59c8d05c-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.235 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.239 247428 INFO os_vif [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d7:77:d1,bridge_name='br-int',has_traffic_filtering=True,id=59c8d05c-d702-4701-8157-aa4f2da6736e,network=Network(1ef6f1a6-165e-4b17-8f40-6c5a006288c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap59c8d05c-d7')#033[00m
Jan 26 13:08:17 np0005596060 neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4[255851]: [NOTICE]   (255875) : haproxy version is 2.8.14-c23fe91
Jan 26 13:08:17 np0005596060 neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4[255851]: [NOTICE]   (255875) : path to executable is /usr/sbin/haproxy
Jan 26 13:08:17 np0005596060 neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4[255851]: [ALERT]    (255875) : Current worker (255882) exited with code 143 (Terminated)
Jan 26 13:08:17 np0005596060 neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4[255851]: [WARNING]  (255875) : All workers exited. Exiting... (0)
Jan 26 13:08:17 np0005596060 systemd[1]: libpod-7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb.scope: Deactivated successfully.
Jan 26 13:08:17 np0005596060 conmon[255851]: conmon 7e27888d9d1e0d304932 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb.scope/container/memory.events
Jan 26 13:08:17 np0005596060 podman[257259]: 2026-01-26 18:08:17.255194237 +0000 UTC m=+0.059072900 container died 7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:08:17 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb-userdata-shm.mount: Deactivated successfully.
Jan 26 13:08:17 np0005596060 systemd[1]: var-lib-containers-storage-overlay-09c40ec4c03ff285927843130cdc4e42e50d14618a446bfcda30a8903dcd833a-merged.mount: Deactivated successfully.
Jan 26 13:08:17 np0005596060 podman[257259]: 2026-01-26 18:08:17.291732049 +0000 UTC m=+0.095610712 container cleanup 7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 13:08:17 np0005596060 systemd[1]: libpod-conmon-7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb.scope: Deactivated successfully.
Jan 26 13:08:17 np0005596060 podman[257311]: 2026-01-26 18:08:17.363900221 +0000 UTC m=+0.046527670 container remove 7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.370 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b10e0b1e-34fd-4ee7-b7a1-84270b2dee3f]: (4, ('Mon Jan 26 06:08:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4 (7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb)\n7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb\nMon Jan 26 06:08:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4 (7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb)\n7e27888d9d1e0d304932c941bba44336d4f391dffbe7d6e0920c1223a44706bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.372 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e7be81a6-2e20-4dec-b410-03e0feaa35f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.373 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ef6f1a6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.375 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:17 np0005596060 kernel: tap1ef6f1a6-10: left promiscuous mode
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.393 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.397 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a83bccb4-4db3-4994-abc1-46bafbef3400]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.419 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[769fbe51-27ab-489d-b96a-d80fe184c2ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.420 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[2107b39a-09e2-40e2-850a-d6fe8b104d09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.443 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[eed499d9-9ebb-435f-a486-7ef4672ddd25]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463615, 'reachable_time': 34502, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257327, 'error': None, 'target': 'ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.448 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1ef6f1a6-165e-4b17-8f40-6c5a006288c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:08:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:17.449 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[677efcf3-e071-4dac-9e7b-0cb6aafe894e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:08:17 np0005596060 systemd[1]: run-netns-ovnmeta\x2d1ef6f1a6\x2d165e\x2d4b17\x2d8f40\x2d6c5a006288c4.mount: Deactivated successfully.
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.614 247428 DEBUG nova.compute.manager [req-3c778d76-e810-4674-bbf3-0c75fd38ea9b req-85e50cfa-43f3-4cc0-be85-c04424db264e 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Received event network-vif-unplugged-59c8d05c-d702-4701-8157-aa4f2da6736e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.615 247428 DEBUG oslo_concurrency.lockutils [req-3c778d76-e810-4674-bbf3-0c75fd38ea9b req-85e50cfa-43f3-4cc0-be85-c04424db264e 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.615 247428 DEBUG oslo_concurrency.lockutils [req-3c778d76-e810-4674-bbf3-0c75fd38ea9b req-85e50cfa-43f3-4cc0-be85-c04424db264e 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.615 247428 DEBUG oslo_concurrency.lockutils [req-3c778d76-e810-4674-bbf3-0c75fd38ea9b req-85e50cfa-43f3-4cc0-be85-c04424db264e 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.616 247428 DEBUG nova.compute.manager [req-3c778d76-e810-4674-bbf3-0c75fd38ea9b req-85e50cfa-43f3-4cc0-be85-c04424db264e 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] No waiting events found dispatching network-vif-unplugged-59c8d05c-d702-4701-8157-aa4f2da6736e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.616 247428 DEBUG nova.compute.manager [req-3c778d76-e810-4674-bbf3-0c75fd38ea9b req-85e50cfa-43f3-4cc0-be85-c04424db264e 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Received event network-vif-unplugged-59c8d05c-d702-4701-8157-aa4f2da6736e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:08:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:17.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.834 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.947 247428 INFO nova.virt.libvirt.driver [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Deleting instance files /var/lib/nova/instances/8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_del#033[00m
Jan 26 13:08:17 np0005596060 nova_compute[247421]: 2026-01-26 18:08:17.949 247428 INFO nova.virt.libvirt.driver [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Deletion of /var/lib/nova/instances/8c19a6a9-b54e-4bc8-a58b-a6186c2d048b_del complete#033[00m
Jan 26 13:08:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 331 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Jan 26 13:08:18 np0005596060 nova_compute[247421]: 2026-01-26 18:08:18.044 247428 INFO nova.compute.manager [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Took 1.28 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:08:18 np0005596060 nova_compute[247421]: 2026-01-26 18:08:18.044 247428 DEBUG oslo.service.loopingcall [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:08:18 np0005596060 nova_compute[247421]: 2026-01-26 18:08:18.045 247428 DEBUG nova.compute.manager [-] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:08:18 np0005596060 nova_compute[247421]: 2026-01-26 18:08:18.045 247428 DEBUG nova.network.neutron [-] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:08:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:18.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:08:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:19.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 42 KiB/s wr, 4 op/s
Jan 26 13:08:19 np0005596060 nova_compute[247421]: 2026-01-26 18:08:19.984 247428 DEBUG nova.compute.manager [req-ab7f9e50-e9f2-4b6a-9754-f31d9539c793 req-f3d2f6c3-0825-4f94-9aec-783616bec337 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Received event network-vif-plugged-59c8d05c-d702-4701-8157-aa4f2da6736e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:08:19 np0005596060 nova_compute[247421]: 2026-01-26 18:08:19.985 247428 DEBUG oslo_concurrency.lockutils [req-ab7f9e50-e9f2-4b6a-9754-f31d9539c793 req-f3d2f6c3-0825-4f94-9aec-783616bec337 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:08:19 np0005596060 nova_compute[247421]: 2026-01-26 18:08:19.985 247428 DEBUG oslo_concurrency.lockutils [req-ab7f9e50-e9f2-4b6a-9754-f31d9539c793 req-f3d2f6c3-0825-4f94-9aec-783616bec337 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:08:19 np0005596060 nova_compute[247421]: 2026-01-26 18:08:19.985 247428 DEBUG oslo_concurrency.lockutils [req-ab7f9e50-e9f2-4b6a-9754-f31d9539c793 req-f3d2f6c3-0825-4f94-9aec-783616bec337 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:08:19 np0005596060 nova_compute[247421]: 2026-01-26 18:08:19.985 247428 DEBUG nova.compute.manager [req-ab7f9e50-e9f2-4b6a-9754-f31d9539c793 req-f3d2f6c3-0825-4f94-9aec-783616bec337 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] No waiting events found dispatching network-vif-plugged-59c8d05c-d702-4701-8157-aa4f2da6736e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:08:19 np0005596060 nova_compute[247421]: 2026-01-26 18:08:19.986 247428 WARNING nova.compute.manager [req-ab7f9e50-e9f2-4b6a-9754-f31d9539c793 req-f3d2f6c3-0825-4f94-9aec-783616bec337 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Received unexpected event network-vif-plugged-59c8d05c-d702-4701-8157-aa4f2da6736e for instance with vm_state active and task_state deleting.#033[00m
Jan 26 13:08:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:20.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:20 np0005596060 nova_compute[247421]: 2026-01-26 18:08:20.293 247428 DEBUG nova.network.neutron [-] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:08:20 np0005596060 nova_compute[247421]: 2026-01-26 18:08:20.330 247428 INFO nova.compute.manager [-] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Took 2.29 seconds to deallocate network for instance.#033[00m
Jan 26 13:08:20 np0005596060 nova_compute[247421]: 2026-01-26 18:08:20.448 247428 DEBUG oslo_concurrency.lockutils [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:08:20 np0005596060 nova_compute[247421]: 2026-01-26 18:08:20.449 247428 DEBUG oslo_concurrency.lockutils [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:08:20 np0005596060 nova_compute[247421]: 2026-01-26 18:08:20.568 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:20 np0005596060 nova_compute[247421]: 2026-01-26 18:08:20.585 247428 DEBUG oslo_concurrency.processutils [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:08:20 np0005596060 nova_compute[247421]: 2026-01-26 18:08:20.853 247428 DEBUG nova.compute.manager [req-63baa0f4-6073-4ccc-8fc4-ebd11bf71f16 req-41f5d2c7-cc11-4956-9aba-fe2d7c08b8f3 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Received event network-vif-deleted-59c8d05c-d702-4701-8157-aa4f2da6736e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:08:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:08:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133346469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:08:21 np0005596060 nova_compute[247421]: 2026-01-26 18:08:21.136 247428 DEBUG oslo_concurrency.processutils [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:08:21 np0005596060 nova_compute[247421]: 2026-01-26 18:08:21.141 247428 DEBUG nova.compute.provider_tree [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:08:21 np0005596060 nova_compute[247421]: 2026-01-26 18:08:21.168 247428 DEBUG nova.scheduler.client.report [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:08:21 np0005596060 nova_compute[247421]: 2026-01-26 18:08:21.218 247428 DEBUG oslo_concurrency.lockutils [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:08:21 np0005596060 nova_compute[247421]: 2026-01-26 18:08:21.334 247428 INFO nova.scheduler.client.report [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Deleted allocations for instance 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b#033[00m
Jan 26 13:08:21 np0005596060 nova_compute[247421]: 2026-01-26 18:08:21.419 247428 DEBUG oslo_concurrency.lockutils [None req-8823ff29-0519-48f8-a1b7-c03ed93328d8 4824e9871dbc4b4c84dffadc67ceb442 4a6a7c4658204ff7b58cbc0fec17a157 - - default default] Lock "8c19a6a9-b54e-4bc8-a58b-a6186c2d048b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:08:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:21.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 180 MiB data, 338 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 42 KiB/s wr, 15 op/s
Jan 26 13:08:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:22.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:22 np0005596060 nova_compute[247421]: 2026-01-26 18:08:22.236 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:23 np0005596060 nova_compute[247421]: 2026-01-26 18:08:23.162 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:08:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:23.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 28 op/s
Jan 26 13:08:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:24.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:25 np0005596060 nova_compute[247421]: 2026-01-26 18:08:25.570 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:25.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 2.2 KiB/s wr, 28 op/s
Jan 26 13:08:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:26.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:27 np0005596060 nova_compute[247421]: 2026-01-26 18:08:27.241 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:27.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 KiB/s wr, 35 op/s
Jan 26 13:08:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:28.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:08:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:29.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 KiB/s wr, 33 op/s
Jan 26 13:08:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:30.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:30 np0005596060 nova_compute[247421]: 2026-01-26 18:08:30.571 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:31 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:31Z|00058|binding|INFO|Releasing lport 46cfbba6-430a-495c-9d6a-60cf58c877d3 from this chassis (sb_readonly=0)
Jan 26 13:08:31 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:31Z|00059|binding|INFO|Releasing lport ec5ab65e-333c-4443-bd37-b74fa484479e from this chassis (sb_readonly=0)
Jan 26 13:08:31 np0005596060 nova_compute[247421]: 2026-01-26 18:08:31.296 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:31 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:31Z|00060|binding|INFO|Releasing lport 46cfbba6-430a-495c-9d6a-60cf58c877d3 from this chassis (sb_readonly=0)
Jan 26 13:08:31 np0005596060 ovn_controller[148842]: 2026-01-26T18:08:31Z|00061|binding|INFO|Releasing lport ec5ab65e-333c-4443-bd37-b74fa484479e from this chassis (sb_readonly=0)
Jan 26 13:08:31 np0005596060 nova_compute[247421]: 2026-01-26 18:08:31.529 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:31.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 121 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.2 KiB/s wr, 35 op/s
Jan 26 13:08:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:32.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:32 np0005596060 nova_compute[247421]: 2026-01-26 18:08:32.201 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769450897.199532, 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:08:32 np0005596060 nova_compute[247421]: 2026-01-26 18:08:32.201 247428 INFO nova.compute.manager [-] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:08:32 np0005596060 nova_compute[247421]: 2026-01-26 18:08:32.229 247428 DEBUG nova.compute.manager [None req-8b676478-cd77-4234-97a4-350102a95583 - - - - - -] [instance: 8c19a6a9-b54e-4bc8-a58b-a6186c2d048b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:08:32 np0005596060 nova_compute[247421]: 2026-01-26 18:08:32.244 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:08:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:33.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 143 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 39 op/s
Jan 26 13:08:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:34.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:35 np0005596060 nova_compute[247421]: 2026-01-26 18:08:35.573 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:35.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:35 np0005596060 podman[257411]: 2026-01-26 18:08:35.80581514 +0000 UTC m=+0.056940928 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 26 13:08:35 np0005596060 podman[257412]: 2026-01-26 18:08:35.839941362 +0000 UTC m=+0.091908030 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 13:08:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 143 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 23 op/s
Jan 26 13:08:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:36.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:37 np0005596060 nova_compute[247421]: 2026-01-26 18:08:37.248 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:37.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 26 13:08:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:38.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:08:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:39.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 26 13:08:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:40.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:40 np0005596060 nova_compute[247421]: 2026-01-26 18:08:40.574 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:41.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 26 13:08:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:42.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:42 np0005596060 nova_compute[247421]: 2026-01-26 18:08:42.251 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:08:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:43.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:08:44
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'images', 'volumes', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'vms', '.rgw.root', '.mgr']
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:08:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:44.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:08:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:08:45 np0005596060 nova_compute[247421]: 2026-01-26 18:08:45.577 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:45.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 667 KiB/s wr, 18 op/s
Jan 26 13:08:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:46.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:47 np0005596060 nova_compute[247421]: 2026-01-26 18:08:47.307 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:47.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 675 KiB/s wr, 19 op/s
Jan 26 13:08:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:48.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:08:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:08:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:49.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:08:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Jan 26 13:08:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:50.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:50 np0005596060 nova_compute[247421]: 2026-01-26 18:08:50.579 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:51.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Jan 26 13:08:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:52.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:52 np0005596060 nova_compute[247421]: 2026-01-26 18:08:52.310 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:52.365 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:08:52 np0005596060 nova_compute[247421]: 2026-01-26 18:08:52.365 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:08:52.367 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:08:53 np0005596060 nova_compute[247421]: 2026-01-26 18:08:53.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:53 np0005596060 nova_compute[247421]: 2026-01-26 18:08:53.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:53 np0005596060 nova_compute[247421]: 2026-01-26 18:08:53.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:53 np0005596060 nova_compute[247421]: 2026-01-26 18:08:53.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:08:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:08:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:53.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Jan 26 13:08:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:54.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:08:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5122 writes, 22K keys, 5122 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 5122 writes, 5122 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1423 writes, 6437 keys, 1423 commit groups, 1.0 writes per commit group, ingest: 9.77 MB, 0.02 MB/s#012Interval WAL: 1423 writes, 1423 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.9      2.67              0.11        13    0.205       0      0       0.0       0.0#012  L6      1/0    7.58 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4     52.1     42.8      2.32              0.34        12    0.193     56K   6434       0.0       0.0#012 Sum      1/0    7.58 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     24.2     25.8      4.99              0.45        25    0.199     56K   6434       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.2     40.6     40.6      1.42              0.21        12    0.118     30K   3066       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     52.1     42.8      2.32              0.34        12    0.193     56K   6434       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.9      2.66              0.11        12    0.222       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.028, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.13 GB write, 0.07 MB/s write, 0.12 GB read, 0.07 MB/s read, 5.0 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5652937211f0#2 capacity: 304.00 MB usage: 10.54 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000136 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(600,10.08 MB,3.31445%) FilterBlock(26,164.67 KB,0.0528988%) IndexBlock(26,311.20 KB,0.0999702%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 26 13:08:55 np0005596060 nova_compute[247421]: 2026-01-26 18:08:55.581 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:55 np0005596060 nova_compute[247421]: 2026-01-26 18:08:55.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:55 np0005596060 nova_compute[247421]: 2026-01-26 18:08:55.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:08:55 np0005596060 nova_compute[247421]: 2026-01-26 18:08:55.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:08:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:55.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:55 np0005596060 nova_compute[247421]: 2026-01-26 18:08:55.933 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "refresh_cache-e40120ae-eb4e-4f0b-9d8f-f0210de78c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:08:55 np0005596060 nova_compute[247421]: 2026-01-26 18:08:55.933 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquired lock "refresh_cache-e40120ae-eb4e-4f0b-9d8f-f0210de78c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:08:55 np0005596060 nova_compute[247421]: 2026-01-26 18:08:55.934 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 26 13:08:55 np0005596060 nova_compute[247421]: 2026-01-26 18:08:55.934 247428 DEBUG nova.objects.instance [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lazy-loading 'info_cache' on Instance uuid e40120ae-eb4e-4f0b-9d8f-f0210de78c4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:08:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Jan 26 13:08:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:56.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:57 np0005596060 nova_compute[247421]: 2026-01-26 18:08:57.312 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:08:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:57.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 7.2 KiB/s rd, 21 KiB/s wr, 11 op/s
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.169 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Updating instance_info_cache with network_info: [{"id": "06538465-e309-4216-af1a-244565d3805b", "address": "fa:16:3e:35:48:ae", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06538465-e3", "ovs_interfaceid": "06538465-e309-4216-af1a-244565d3805b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.195 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Releasing lock "refresh_cache-e40120ae-eb4e-4f0b-9d8f-f0210de78c4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.195 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.196 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.196 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.196 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.196 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.196 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.218 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.218 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.218 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.219 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.219 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:08:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:08:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:08:58.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:08:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:08:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020469485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.675 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:08:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.774 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:08:58 np0005596060 nova_compute[247421]: 2026-01-26 18:08:58.774 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.010 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.011 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4676MB free_disk=20.94263458251953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.012 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.012 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.094 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance e40120ae-eb4e-4f0b-9d8f-f0210de78c4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.095 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.095 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.145 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:08:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:08:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3229913875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.639 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.645 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.661 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.689 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:08:59 np0005596060 nova_compute[247421]: 2026-01-26 18:08:59.689 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:08:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:08:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:08:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:08:59.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:08:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 26 13:09:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:00.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:00 np0005596060 nova_compute[247421]: 2026-01-26 18:09:00.583 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:00 np0005596060 nova_compute[247421]: 2026-01-26 18:09:00.919 247428 DEBUG nova.virt.libvirt.driver [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Creating tmpfile /var/lib/nova/instances/tmp3ouej46o to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Jan 26 13:09:00 np0005596060 nova_compute[247421]: 2026-01-26 18:09:00.921 247428 DEBUG nova.compute.manager [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3ouej46o',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Jan 26 13:09:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:01.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 357 KiB/s rd, 12 KiB/s wr, 22 op/s
Jan 26 13:09:02 np0005596060 nova_compute[247421]: 2026-01-26 18:09:02.007 247428 DEBUG nova.compute.manager [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3ouej46o',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4c4b2733-13a7-49fe-bbfb-f3e063298716',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Jan 26 13:09:02 np0005596060 nova_compute[247421]: 2026-01-26 18:09:02.045 247428 DEBUG oslo_concurrency.lockutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquiring lock "refresh_cache-4c4b2733-13a7-49fe-bbfb-f3e063298716" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:09:02 np0005596060 nova_compute[247421]: 2026-01-26 18:09:02.046 247428 DEBUG oslo_concurrency.lockutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquired lock "refresh_cache-4c4b2733-13a7-49fe-bbfb-f3e063298716" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:09:02 np0005596060 nova_compute[247421]: 2026-01-26 18:09:02.046 247428 DEBUG nova.network.neutron [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:09:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:02.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:02 np0005596060 nova_compute[247421]: 2026-01-26 18:09:02.316 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:02.370 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021748644844593336 of space, bias 1.0, pg target 0.6524593453378001 quantized to 32 (current 32)
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:09:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:09:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:09:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:03.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:09:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:09:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:04.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:04 np0005596060 nova_compute[247421]: 2026-01-26 18:09:04.836 247428 DEBUG nova.network.neutron [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Updating instance_info_cache with network_info: [{"id": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "address": "fa:16:3e:89:24:36", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3bd4b07-ea", "ovs_interfaceid": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:09:04 np0005596060 nova_compute[247421]: 2026-01-26 18:09:04.863 247428 DEBUG oslo_concurrency.lockutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Releasing lock "refresh_cache-4c4b2733-13a7-49fe-bbfb-f3e063298716" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:09:04 np0005596060 nova_compute[247421]: 2026-01-26 18:09:04.866 247428 DEBUG os_brick.utils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 26 13:09:04 np0005596060 nova_compute[247421]: 2026-01-26 18:09:04.869 247428 INFO oslo.privsep.daemon [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpqdg3u8ip/privsep.sock']#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.584 247428 INFO oslo.privsep.daemon [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.588 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.460 257571 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.463 257571 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.465 257571 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.465 257571 INFO oslo.privsep.daemon [-] privsep daemon running as pid 257571#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.593 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[62248780-044b-4d26-8ea1-fada81e616da]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.753 257571 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:09:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:05.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.782 257571 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.782 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[915e2cf9-3f37-40d1-9f91-b9d5558cd20b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.784 257571 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.796 257571 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.796 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[7cadbea7-8a6d-4543-8508-6a7dbb7d05e2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14cb718ec160', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.799 257571 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.812 257571 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.813 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[788a1fe8-2878-4070-b284-6a30266d6aff]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.815 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[c24ab3cf-ad1c-468a-84df-6da4cd3edbe4]: (4, 'd27b7a41-30de-40e4-9f10-b4e4f5902919') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.816 247428 DEBUG oslo_concurrency.processutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.843 247428 DEBUG oslo_concurrency.processutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.849 247428 DEBUG os_brick.initiator.connectors.lightos [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.851 247428 DEBUG os_brick.initiator.connectors.lightos [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.852 247428 DEBUG os_brick.initiator.connectors.lightos [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 26 13:09:05 np0005596060 nova_compute[247421]: 2026-01-26 18:09:05.852 247428 DEBUG os_brick.utils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] <== get_connector_properties: return (985ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14cb718ec160', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'd27b7a41-30de-40e4-9f10-b4e4f5902919', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 26 13:09:05 np0005596060 podman[257655]: 2026-01-26 18:09:05.951581967 +0000 UTC m=+0.057737997 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:09:05 np0005596060 podman[257656]: 2026-01-26 18:09:05.985022353 +0000 UTC m=+0.085525963 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 13:09:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:09:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:09:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:06.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:09:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:09:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:09:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:09:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:09:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:09:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:09:07 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7c2e47f1-a64d-4fc6-b78a-73ddd4067030 does not exist
Jan 26 13:09:07 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev fb5b3ce9-d980-4c67-bfd3-1ddee5e147d3 does not exist
Jan 26 13:09:07 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 52d4032e-1869-4192-a170-fbc7d2cd715c does not exist
Jan 26 13:09:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:09:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:09:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:09:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:09:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:09:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.319 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.493 247428 DEBUG nova.virt.libvirt.driver [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3ouej46o',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4c4b2733-13a7-49fe-bbfb-f3e063298716',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={b5b60a57-95c9-48f2-a72a-66b14f738be8='5025e74b-c2b1-4272-a524-e7eeb678c73d'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.494 247428 DEBUG nova.virt.libvirt.driver [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Creating instance directory: /var/lib/nova/instances/4c4b2733-13a7-49fe-bbfb-f3e063298716 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.494 247428 DEBUG nova.virt.libvirt.driver [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Ensure instance console log exists: /var/lib/nova/instances/4c4b2733-13a7-49fe-bbfb-f3e063298716/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.495 247428 DEBUG nova.virt.libvirt.driver [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.495 247428 DEBUG oslo_concurrency.lockutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.495 247428 DEBUG oslo_concurrency.lockutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.496 247428 DEBUG oslo_concurrency.lockutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.503 247428 DEBUG nova.virt.libvirt.driver [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.505 247428 DEBUG nova.virt.libvirt.vif [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:08:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-634605113',display_name='tempest-LiveMigrationTest-server-634605113',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-634605113',id=7,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:08:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b1f2cad350784d7eae39fc23fb032500',ramdisk_id='',reservation_id='r-8pp60248',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-877386369',owner_user_name='tempest-LiveMigrationTest-877386369-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:08:58Z,user_data=None,user_id='9e3f505042e7463683259f02e8e59eca',uuid=4c4b2733-13a7-49fe-bbfb-f3e063298716,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "address": "fa:16:3e:89:24:36", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapc3bd4b07-ea", "ovs_interfaceid": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.505 247428 DEBUG nova.network.os_vif_util [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Converting VIF {"id": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "address": "fa:16:3e:89:24:36", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapc3bd4b07-ea", "ovs_interfaceid": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.507 247428 DEBUG nova.network.os_vif_util [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:24:36,bridge_name='br-int',has_traffic_filtering=True,id=c3bd4b07-ea7b-40da-8a33-0ac219177512,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3bd4b07-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.507 247428 DEBUG os_vif [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:24:36,bridge_name='br-int',has_traffic_filtering=True,id=c3bd4b07-ea7b-40da-8a33-0ac219177512,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3bd4b07-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.508 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.508 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.509 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.512 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.512 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3bd4b07-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.513 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc3bd4b07-ea, col_values=(('external_ids', {'iface-id': 'c3bd4b07-ea7b-40da-8a33-0ac219177512', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:24:36', 'vm-uuid': '4c4b2733-13a7-49fe-bbfb-f3e063298716'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:07 np0005596060 NetworkManager[48900]: <info>  [1769450947.5495] manager: (tapc3bd4b07-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.548 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.551 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.558 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.559 247428 INFO os_vif [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:24:36,bridge_name='br-int',has_traffic_filtering=True,id=c3bd4b07-ea7b-40da-8a33-0ac219177512,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3bd4b07-ea')#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.562 247428 DEBUG nova.virt.libvirt.driver [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Jan 26 13:09:07 np0005596060 nova_compute[247421]: 2026-01-26 18:09:07.562 247428 DEBUG nova.compute.manager [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3ouej46o',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4c4b2733-13a7-49fe-bbfb-f3e063298716',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={b5b60a57-95c9-48f2-a72a-66b14f738be8='5025e74b-c2b1-4272-a524-e7eeb678c73d'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Jan 26 13:09:07 np0005596060 podman[257948]: 2026-01-26 18:09:07.706441488 +0000 UTC m=+0.071353833 container create 14e6c92040ae7f7ee21538a62f2bb7dd794755ecc6162b7ddf0762d984c98d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:09:07 np0005596060 podman[257948]: 2026-01-26 18:09:07.658141475 +0000 UTC m=+0.023053800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:09:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:07.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:09:08 np0005596060 systemd[1]: Started libpod-conmon-14e6c92040ae7f7ee21538a62f2bb7dd794755ecc6162b7ddf0762d984c98d11.scope.
Jan 26 13:09:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:09:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:09:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:09:08 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:09:08 np0005596060 podman[257948]: 2026-01-26 18:09:08.190825609 +0000 UTC m=+0.555737944 container init 14e6c92040ae7f7ee21538a62f2bb7dd794755ecc6162b7ddf0762d984c98d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 13:09:08 np0005596060 podman[257948]: 2026-01-26 18:09:08.207585503 +0000 UTC m=+0.572497818 container start 14e6c92040ae7f7ee21538a62f2bb7dd794755ecc6162b7ddf0762d984c98d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:09:08 np0005596060 systemd[1]: libpod-14e6c92040ae7f7ee21538a62f2bb7dd794755ecc6162b7ddf0762d984c98d11.scope: Deactivated successfully.
Jan 26 13:09:08 np0005596060 vigilant_allen[257966]: 167 167
Jan 26 13:09:08 np0005596060 conmon[257966]: conmon 14e6c92040ae7f7ee215 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-14e6c92040ae7f7ee21538a62f2bb7dd794755ecc6162b7ddf0762d984c98d11.scope/container/memory.events
Jan 26 13:09:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:09:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:08.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:09:08 np0005596060 podman[257948]: 2026-01-26 18:09:08.457916094 +0000 UTC m=+0.822828399 container attach 14e6c92040ae7f7ee21538a62f2bb7dd794755ecc6162b7ddf0762d984c98d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:09:08 np0005596060 podman[257948]: 2026-01-26 18:09:08.460914028 +0000 UTC m=+0.825826363 container died 14e6c92040ae7f7ee21538a62f2bb7dd794755ecc6162b7ddf0762d984c98d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 13:09:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:09:08 np0005596060 nova_compute[247421]: 2026-01-26 18:09:08.762 247428 DEBUG nova.network.neutron [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Port c3bd4b07-ea7b-40da-8a33-0ac219177512 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Jan 26 13:09:09 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c268e3905b3b7e0b98c9494d4bb7c5eca9b17941e30f7d2a59f7d31dfd2e0488-merged.mount: Deactivated successfully.
Jan 26 13:09:09 np0005596060 nova_compute[247421]: 2026-01-26 18:09:09.161 247428 DEBUG nova.compute.manager [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmp3ouej46o',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4c4b2733-13a7-49fe-bbfb-f3e063298716',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={b5b60a57-95c9-48f2-a72a-66b14f738be8='5025e74b-c2b1-4272-a524-e7eeb678c73d'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Jan 26 13:09:09 np0005596060 podman[257948]: 2026-01-26 18:09:09.220824062 +0000 UTC m=+1.585736457 container remove 14e6c92040ae7f7ee21538a62f2bb7dd794755ecc6162b7ddf0762d984c98d11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 13:09:09 np0005596060 systemd[1]: libpod-conmon-14e6c92040ae7f7ee21538a62f2bb7dd794755ecc6162b7ddf0762d984c98d11.scope: Deactivated successfully.
Jan 26 13:09:09 np0005596060 kernel: tapc3bd4b07-ea: entered promiscuous mode
Jan 26 13:09:09 np0005596060 nova_compute[247421]: 2026-01-26 18:09:09.428 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:09Z|00062|binding|INFO|Claiming lport c3bd4b07-ea7b-40da-8a33-0ac219177512 for this additional chassis.
Jan 26 13:09:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:09Z|00063|binding|INFO|c3bd4b07-ea7b-40da-8a33-0ac219177512: Claiming fa:16:3e:89:24:36 10.100.0.12
Jan 26 13:09:09 np0005596060 NetworkManager[48900]: <info>  [1769450949.4333] manager: (tapc3bd4b07-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Jan 26 13:09:09 np0005596060 nova_compute[247421]: 2026-01-26 18:09:09.453 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:09Z|00064|binding|INFO|Setting lport c3bd4b07-ea7b-40da-8a33-0ac219177512 ovn-installed in OVS
Jan 26 13:09:09 np0005596060 nova_compute[247421]: 2026-01-26 18:09:09.455 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:09 np0005596060 systemd-machined[213879]: New machine qemu-5-instance-00000007.
Jan 26 13:09:09 np0005596060 systemd[1]: Started Virtual Machine qemu-5-instance-00000007.
Jan 26 13:09:09 np0005596060 systemd-udevd[258015]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:09:09 np0005596060 NetworkManager[48900]: <info>  [1769450949.5288] device (tapc3bd4b07-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:09:09 np0005596060 NetworkManager[48900]: <info>  [1769450949.5311] device (tapc3bd4b07-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:09:09 np0005596060 podman[257998]: 2026-01-26 18:09:09.479670374 +0000 UTC m=+0.036361239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:09:09 np0005596060 podman[257998]: 2026-01-26 18:09:09.637362548 +0000 UTC m=+0.194053363 container create 2fa857cad193eda64fc261fdb8e10a3a43c6404f346fb09b0f20ddec6c7e4cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cori, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:09:09 np0005596060 systemd[1]: Started libpod-conmon-2fa857cad193eda64fc261fdb8e10a3a43c6404f346fb09b0f20ddec6c7e4cdf.scope.
Jan 26 13:09:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:09.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:09 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:09:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d699cb4d2a563717464ae77518e80e815202701ee9834b555ebc9780bb3be02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d699cb4d2a563717464ae77518e80e815202701ee9834b555ebc9780bb3be02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d699cb4d2a563717464ae77518e80e815202701ee9834b555ebc9780bb3be02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d699cb4d2a563717464ae77518e80e815202701ee9834b555ebc9780bb3be02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d699cb4d2a563717464ae77518e80e815202701ee9834b555ebc9780bb3be02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:09 np0005596060 podman[257998]: 2026-01-26 18:09:09.817719791 +0000 UTC m=+0.374410626 container init 2fa857cad193eda64fc261fdb8e10a3a43c6404f346fb09b0f20ddec6c7e4cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:09:09 np0005596060 podman[257998]: 2026-01-26 18:09:09.826885358 +0000 UTC m=+0.383576183 container start 2fa857cad193eda64fc261fdb8e10a3a43c6404f346fb09b0f20ddec6c7e4cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cori, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:09:09 np0005596060 podman[257998]: 2026-01-26 18:09:09.836563487 +0000 UTC m=+0.393254322 container attach 2fa857cad193eda64fc261fdb8e10a3a43c6404f346fb09b0f20ddec6c7e4cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cori, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:09:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 63 op/s
Jan 26 13:09:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:10.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:10 np0005596060 nova_compute[247421]: 2026-01-26 18:09:10.539 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450950.5384734, 4c4b2733-13a7-49fe-bbfb-f3e063298716 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:09:10 np0005596060 nova_compute[247421]: 2026-01-26 18:09:10.539 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] VM Started (Lifecycle Event)#033[00m
Jan 26 13:09:10 np0005596060 nova_compute[247421]: 2026-01-26 18:09:10.560 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:09:10 np0005596060 nova_compute[247421]: 2026-01-26 18:09:10.587 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:10 np0005596060 elastic_cori[258043]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:09:10 np0005596060 elastic_cori[258043]: --> relative data size: 1.0
Jan 26 13:09:10 np0005596060 elastic_cori[258043]: --> All data devices are unavailable
Jan 26 13:09:10 np0005596060 systemd[1]: libpod-2fa857cad193eda64fc261fdb8e10a3a43c6404f346fb09b0f20ddec6c7e4cdf.scope: Deactivated successfully.
Jan 26 13:09:10 np0005596060 podman[257998]: 2026-01-26 18:09:10.724662475 +0000 UTC m=+1.281353320 container died 2fa857cad193eda64fc261fdb8e10a3a43c6404f346fb09b0f20ddec6c7e4cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:09:10 np0005596060 systemd[1]: var-lib-containers-storage-overlay-3d699cb4d2a563717464ae77518e80e815202701ee9834b555ebc9780bb3be02-merged.mount: Deactivated successfully.
Jan 26 13:09:10 np0005596060 podman[257998]: 2026-01-26 18:09:10.807526592 +0000 UTC m=+1.364217407 container remove 2fa857cad193eda64fc261fdb8e10a3a43c6404f346fb09b0f20ddec6c7e4cdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cori, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 13:09:10 np0005596060 systemd[1]: libpod-conmon-2fa857cad193eda64fc261fdb8e10a3a43c6404f346fb09b0f20ddec6c7e4cdf.scope: Deactivated successfully.
Jan 26 13:09:11 np0005596060 podman[258238]: 2026-01-26 18:09:11.535259831 +0000 UTC m=+0.086779033 container create 0cb9c2a7ebbe7abcf91ad7c6b4a247df575c63e09905f5caabd4861e1adb4314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 26 13:09:11 np0005596060 podman[258238]: 2026-01-26 18:09:11.470315588 +0000 UTC m=+0.021834820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:09:11 np0005596060 systemd[1]: Started libpod-conmon-0cb9c2a7ebbe7abcf91ad7c6b4a247df575c63e09905f5caabd4861e1adb4314.scope.
Jan 26 13:09:11 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:09:11 np0005596060 podman[258238]: 2026-01-26 18:09:11.739827623 +0000 UTC m=+0.291346845 container init 0cb9c2a7ebbe7abcf91ad7c6b4a247df575c63e09905f5caabd4861e1adb4314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:09:11 np0005596060 nova_compute[247421]: 2026-01-26 18:09:11.744 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450951.7439153, 4c4b2733-13a7-49fe-bbfb-f3e063298716 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:09:11 np0005596060 nova_compute[247421]: 2026-01-26 18:09:11.744 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:09:11 np0005596060 podman[258238]: 2026-01-26 18:09:11.753556442 +0000 UTC m=+0.305075654 container start 0cb9c2a7ebbe7abcf91ad7c6b4a247df575c63e09905f5caabd4861e1adb4314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 13:09:11 np0005596060 podman[258238]: 2026-01-26 18:09:11.757777376 +0000 UTC m=+0.309296578 container attach 0cb9c2a7ebbe7abcf91ad7c6b4a247df575c63e09905f5caabd4861e1adb4314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 13:09:11 np0005596060 friendly_wilson[258254]: 167 167
Jan 26 13:09:11 np0005596060 systemd[1]: libpod-0cb9c2a7ebbe7abcf91ad7c6b4a247df575c63e09905f5caabd4861e1adb4314.scope: Deactivated successfully.
Jan 26 13:09:11 np0005596060 podman[258238]: 2026-01-26 18:09:11.760519444 +0000 UTC m=+0.312038636 container died 0cb9c2a7ebbe7abcf91ad7c6b4a247df575c63e09905f5caabd4861e1adb4314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 13:09:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:11.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:11 np0005596060 nova_compute[247421]: 2026-01-26 18:09:11.765 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:09:11 np0005596060 nova_compute[247421]: 2026-01-26 18:09:11.771 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:09:11 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d148a6765cec7656cf0a3929c8811913c93ed941f699ccfeef555e7793c5c646-merged.mount: Deactivated successfully.
Jan 26 13:09:11 np0005596060 nova_compute[247421]: 2026-01-26 18:09:11.802 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 26 13:09:11 np0005596060 podman[258238]: 2026-01-26 18:09:11.804779587 +0000 UTC m=+0.356298789 container remove 0cb9c2a7ebbe7abcf91ad7c6b4a247df575c63e09905f5caabd4861e1adb4314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 13:09:11 np0005596060 systemd[1]: libpod-conmon-0cb9c2a7ebbe7abcf91ad7c6b4a247df575c63e09905f5caabd4861e1adb4314.scope: Deactivated successfully.
Jan 26 13:09:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 175 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 672 KiB/s wr, 73 op/s
Jan 26 13:09:12 np0005596060 podman[258277]: 2026-01-26 18:09:12.00132157 +0000 UTC m=+0.051647777 container create f43522038ab8ed598f48c3b11ee7dba9d8eee30867294742f9420a3c2adc35cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:09:12 np0005596060 systemd[1]: Started libpod-conmon-f43522038ab8ed598f48c3b11ee7dba9d8eee30867294742f9420a3c2adc35cd.scope.
Jan 26 13:09:12 np0005596060 podman[258277]: 2026-01-26 18:09:11.980462605 +0000 UTC m=+0.030788822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:09:12 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:09:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919355fe4a71c5f06e75127ec0f4316f9cf3950fa12054b1d8c9bdd80f941497/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919355fe4a71c5f06e75127ec0f4316f9cf3950fa12054b1d8c9bdd80f941497/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919355fe4a71c5f06e75127ec0f4316f9cf3950fa12054b1d8c9bdd80f941497/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/919355fe4a71c5f06e75127ec0f4316f9cf3950fa12054b1d8c9bdd80f941497/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:12 np0005596060 podman[258277]: 2026-01-26 18:09:12.104062767 +0000 UTC m=+0.154388984 container init f43522038ab8ed598f48c3b11ee7dba9d8eee30867294742f9420a3c2adc35cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_saha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 13:09:12 np0005596060 podman[258277]: 2026-01-26 18:09:12.122445311 +0000 UTC m=+0.172771538 container start f43522038ab8ed598f48c3b11ee7dba9d8eee30867294742f9420a3c2adc35cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 26 13:09:12 np0005596060 podman[258277]: 2026-01-26 18:09:12.130690064 +0000 UTC m=+0.181016251 container attach f43522038ab8ed598f48c3b11ee7dba9d8eee30867294742f9420a3c2adc35cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 26 13:09:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:09:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:12.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:09:12 np0005596060 nova_compute[247421]: 2026-01-26 18:09:12.548 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]: {
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:    "1": [
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:        {
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            "devices": [
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "/dev/loop3"
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            ],
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            "lv_name": "ceph_lv0",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            "lv_size": "7511998464",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            "name": "ceph_lv0",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            "tags": {
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "ceph.cluster_name": "ceph",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "ceph.crush_device_class": "",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "ceph.encrypted": "0",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "ceph.osd_id": "1",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "ceph.type": "block",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:                "ceph.vdo": "0"
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            },
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            "type": "block",
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:            "vg_name": "ceph_vg0"
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:        }
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]:    ]
Jan 26 13:09:12 np0005596060 hardcore_saha[258294]: }
Jan 26 13:09:13 np0005596060 systemd[1]: libpod-f43522038ab8ed598f48c3b11ee7dba9d8eee30867294742f9420a3c2adc35cd.scope: Deactivated successfully.
Jan 26 13:09:13 np0005596060 podman[258304]: 2026-01-26 18:09:13.054020894 +0000 UTC m=+0.027176552 container died f43522038ab8ed598f48c3b11ee7dba9d8eee30867294742f9420a3c2adc35cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 13:09:13 np0005596060 systemd[1]: var-lib-containers-storage-overlay-919355fe4a71c5f06e75127ec0f4316f9cf3950fa12054b1d8c9bdd80f941497-merged.mount: Deactivated successfully.
Jan 26 13:09:13 np0005596060 podman[258304]: 2026-01-26 18:09:13.116293791 +0000 UTC m=+0.089449439 container remove f43522038ab8ed598f48c3b11ee7dba9d8eee30867294742f9420a3c2adc35cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:09:13 np0005596060 systemd[1]: libpod-conmon-f43522038ab8ed598f48c3b11ee7dba9d8eee30867294742f9420a3c2adc35cd.scope: Deactivated successfully.
Jan 26 13:09:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:13Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:24:36 10.100.0.12
Jan 26 13:09:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:13Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:24:36 10.100.0.12
Jan 26 13:09:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:13Z|00065|binding|INFO|Claiming lport c3bd4b07-ea7b-40da-8a33-0ac219177512 for this chassis.
Jan 26 13:09:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:13Z|00066|binding|INFO|c3bd4b07-ea7b-40da-8a33-0ac219177512: Claiming fa:16:3e:89:24:36 10.100.0.12
Jan 26 13:09:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:13Z|00067|binding|INFO|Setting lport c3bd4b07-ea7b-40da-8a33-0ac219177512 up in Southbound
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.659 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:24:36 10.100.0.12'], port_security=['fa:16:3e:89:24:36 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4c4b2733-13a7-49fe-bbfb-f3e063298716', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0516cc55-93b8-4bf2-b595-d07702fa255b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1f2cad350784d7eae39fc23fb032500', 'neutron:revision_number': '11', 'neutron:security_group_ids': '4e1bd851-4cc2-4677-be2e-39f74460bffd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db9bad5b-1a88-4481-85c1-c131f59dea19, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=c3bd4b07-ea7b-40da-8a33-0ac219177512) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.660 159331 INFO neutron.agent.ovn.metadata.agent [-] Port c3bd4b07-ea7b-40da-8a33-0ac219177512 in datapath 0516cc55-93b8-4bf2-b595-d07702fa255b bound to our chassis#033[00m
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.661 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0516cc55-93b8-4bf2-b595-d07702fa255b#033[00m
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.682 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a9b6c4-24a3-4e1a-bc8c-bfcb51c60b29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.717 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[aac257c3-1e19-47a5-ae99-a3b8122c211f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.721 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[19248832-4ead-427c-a36c-5db9283d662c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.753 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[86eb8db0-b3ad-4c6c-a8f7-2b7add583922]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:13.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.776 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[3c217d13-c8ec-4b8e-b087-0586b1b905f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0516cc55-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:40:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1162, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1162, 'tx_bytes': 530, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464569, 'reachable_time': 31510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258453, 'error': None, 'target': 'ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.792 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5904b89f-4ce1-4a8c-b4ca-e7099dfed499]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0516cc55-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464582, 'tstamp': 464582}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258465, 'error': None, 'target': 'ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0516cc55-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464587, 'tstamp': 464587}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258465, 'error': None, 'target': 'ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.795 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0516cc55-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:13 np0005596060 nova_compute[247421]: 2026-01-26 18:09:13.796 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:13 np0005596060 nova_compute[247421]: 2026-01-26 18:09:13.798 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.799 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0516cc55-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.799 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.800 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0516cc55-90, col_values=(('external_ids', {'iface-id': '46cfbba6-430a-495c-9d6a-60cf58c877d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:13.800 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:09:13 np0005596060 nova_compute[247421]: 2026-01-26 18:09:13.801 247428 INFO nova.compute.manager [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Post operation of migration started#033[00m
Jan 26 13:09:13 np0005596060 podman[258468]: 2026-01-26 18:09:13.868987786 +0000 UTC m=+0.025438949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:09:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 188 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 80 op/s
Jan 26 13:09:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:09:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:09:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:09:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:09:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:09:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:09:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:14.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:14 np0005596060 podman[258468]: 2026-01-26 18:09:14.261944479 +0000 UTC m=+0.418395642 container create 1f373b762ad78f9cac9e28f0a2e15cfceaecb76b3c1367fc8f073e854b04d0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 13:09:14 np0005596060 nova_compute[247421]: 2026-01-26 18:09:14.267 247428 DEBUG oslo_concurrency.lockutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquiring lock "refresh_cache-4c4b2733-13a7-49fe-bbfb-f3e063298716" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:09:14 np0005596060 nova_compute[247421]: 2026-01-26 18:09:14.267 247428 DEBUG oslo_concurrency.lockutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquired lock "refresh_cache-4c4b2733-13a7-49fe-bbfb-f3e063298716" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:09:14 np0005596060 nova_compute[247421]: 2026-01-26 18:09:14.267 247428 DEBUG nova.network.neutron [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:09:14 np0005596060 systemd[1]: Started libpod-conmon-1f373b762ad78f9cac9e28f0a2e15cfceaecb76b3c1367fc8f073e854b04d0f7.scope.
Jan 26 13:09:14 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:09:14 np0005596060 podman[258468]: 2026-01-26 18:09:14.354825563 +0000 UTC m=+0.511276736 container init 1f373b762ad78f9cac9e28f0a2e15cfceaecb76b3c1367fc8f073e854b04d0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goodall, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 13:09:14 np0005596060 podman[258468]: 2026-01-26 18:09:14.364793869 +0000 UTC m=+0.521245022 container start 1f373b762ad78f9cac9e28f0a2e15cfceaecb76b3c1367fc8f073e854b04d0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 13:09:14 np0005596060 podman[258468]: 2026-01-26 18:09:14.36845487 +0000 UTC m=+0.524906023 container attach 1f373b762ad78f9cac9e28f0a2e15cfceaecb76b3c1367fc8f073e854b04d0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:09:14 np0005596060 pensive_goodall[258484]: 167 167
Jan 26 13:09:14 np0005596060 systemd[1]: libpod-1f373b762ad78f9cac9e28f0a2e15cfceaecb76b3c1367fc8f073e854b04d0f7.scope: Deactivated successfully.
Jan 26 13:09:14 np0005596060 conmon[258484]: conmon 1f373b762ad78f9cac9e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f373b762ad78f9cac9e28f0a2e15cfceaecb76b3c1367fc8f073e854b04d0f7.scope/container/memory.events
Jan 26 13:09:14 np0005596060 podman[258468]: 2026-01-26 18:09:14.372469119 +0000 UTC m=+0.528920272 container died 1f373b762ad78f9cac9e28f0a2e15cfceaecb76b3c1367fc8f073e854b04d0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 26 13:09:14 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7490408b7a380ceca7b4caa4d7ef1267cecc65388d7dd5d76deb7f77fbeecae9-merged.mount: Deactivated successfully.
Jan 26 13:09:14 np0005596060 podman[258468]: 2026-01-26 18:09:14.411086512 +0000 UTC m=+0.567537665 container remove 1f373b762ad78f9cac9e28f0a2e15cfceaecb76b3c1367fc8f073e854b04d0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_goodall, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 13:09:14 np0005596060 systemd[1]: libpod-conmon-1f373b762ad78f9cac9e28f0a2e15cfceaecb76b3c1367fc8f073e854b04d0f7.scope: Deactivated successfully.
Jan 26 13:09:14 np0005596060 podman[258507]: 2026-01-26 18:09:14.629652149 +0000 UTC m=+0.063239072 container create 786a060273e5b486c9de113acc1a57734d01540da08a21f8a1de8b8562840fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:09:14 np0005596060 systemd[1]: Started libpod-conmon-786a060273e5b486c9de113acc1a57734d01540da08a21f8a1de8b8562840fb1.scope.
Jan 26 13:09:14 np0005596060 podman[258507]: 2026-01-26 18:09:14.60942561 +0000 UTC m=+0.043012623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:09:14 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:09:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfd565aa85bbdca54325eb5081d1a47d1e89667ca6d221c9e8425a8b86f14bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfd565aa85bbdca54325eb5081d1a47d1e89667ca6d221c9e8425a8b86f14bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfd565aa85bbdca54325eb5081d1a47d1e89667ca6d221c9e8425a8b86f14bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdfd565aa85bbdca54325eb5081d1a47d1e89667ca6d221c9e8425a8b86f14bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:09:14 np0005596060 podman[258507]: 2026-01-26 18:09:14.723384544 +0000 UTC m=+0.156971477 container init 786a060273e5b486c9de113acc1a57734d01540da08a21f8a1de8b8562840fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 13:09:14 np0005596060 podman[258507]: 2026-01-26 18:09:14.730174251 +0000 UTC m=+0.163761184 container start 786a060273e5b486c9de113acc1a57734d01540da08a21f8a1de8b8562840fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:09:14 np0005596060 podman[258507]: 2026-01-26 18:09:14.734003016 +0000 UTC m=+0.167589959 container attach 786a060273e5b486c9de113acc1a57734d01540da08a21f8a1de8b8562840fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:09:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:14.741 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:14.742 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:14.743 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:15 np0005596060 nova_compute[247421]: 2026-01-26 18:09:15.589 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:15 np0005596060 strange_noyce[258523]: {
Jan 26 13:09:15 np0005596060 strange_noyce[258523]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:09:15 np0005596060 strange_noyce[258523]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:09:15 np0005596060 strange_noyce[258523]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:09:15 np0005596060 strange_noyce[258523]:        "osd_id": 1,
Jan 26 13:09:15 np0005596060 strange_noyce[258523]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:09:15 np0005596060 strange_noyce[258523]:        "type": "bluestore"
Jan 26 13:09:15 np0005596060 strange_noyce[258523]:    }
Jan 26 13:09:15 np0005596060 strange_noyce[258523]: }
Jan 26 13:09:15 np0005596060 systemd[1]: libpod-786a060273e5b486c9de113acc1a57734d01540da08a21f8a1de8b8562840fb1.scope: Deactivated successfully.
Jan 26 13:09:15 np0005596060 podman[258507]: 2026-01-26 18:09:15.646875447 +0000 UTC m=+1.080462380 container died 786a060273e5b486c9de113acc1a57734d01540da08a21f8a1de8b8562840fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 13:09:15 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fdfd565aa85bbdca54325eb5081d1a47d1e89667ca6d221c9e8425a8b86f14bb-merged.mount: Deactivated successfully.
Jan 26 13:09:15 np0005596060 podman[258507]: 2026-01-26 18:09:15.707702099 +0000 UTC m=+1.141289012 container remove 786a060273e5b486c9de113acc1a57734d01540da08a21f8a1de8b8562840fb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 13:09:15 np0005596060 systemd[1]: libpod-conmon-786a060273e5b486c9de113acc1a57734d01540da08a21f8a1de8b8562840fb1.scope: Deactivated successfully.
Jan 26 13:09:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:09:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:09:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:09:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:15.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:09:15 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 1f567d87-6bc1-405d-9909-e107284acfe6 does not exist
Jan 26 13:09:15 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev cfe08078-cd5a-4303-a9f4-5d700d7af9d5 does not exist
Jan 26 13:09:15 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev eda5dd0b-54ed-440d-b4dd-ce32962f1a42 does not exist
Jan 26 13:09:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 188 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 1.9 MiB/s wr, 29 op/s
Jan 26 13:09:16 np0005596060 nova_compute[247421]: 2026-01-26 18:09:16.224 247428 DEBUG nova.network.neutron [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Updating instance_info_cache with network_info: [{"id": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "address": "fa:16:3e:89:24:36", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3bd4b07-ea", "ovs_interfaceid": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:09:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:16.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:16 np0005596060 nova_compute[247421]: 2026-01-26 18:09:16.257 247428 DEBUG oslo_concurrency.lockutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Releasing lock "refresh_cache-4c4b2733-13a7-49fe-bbfb-f3e063298716" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:09:16 np0005596060 nova_compute[247421]: 2026-01-26 18:09:16.274 247428 DEBUG oslo_concurrency.lockutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:16 np0005596060 nova_compute[247421]: 2026-01-26 18:09:16.275 247428 DEBUG oslo_concurrency.lockutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:16 np0005596060 nova_compute[247421]: 2026-01-26 18:09:16.275 247428 DEBUG oslo_concurrency.lockutils [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:16 np0005596060 nova_compute[247421]: 2026-01-26 18:09:16.281 247428 INFO nova.virt.libvirt.driver [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Jan 26 13:09:16 np0005596060 virtqemud[246749]: Domain id=5 name='instance-00000007' uuid=4c4b2733-13a7-49fe-bbfb-f3e063298716 is tainted: custom-monitor
Jan 26 13:09:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:09:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:09:17 np0005596060 nova_compute[247421]: 2026-01-26 18:09:17.288 247428 INFO nova.virt.libvirt.driver [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Jan 26 13:09:17 np0005596060 nova_compute[247421]: 2026-01-26 18:09:17.550 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:17.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 342 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 26 13:09:18 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 13:09:18 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 13:09:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:18.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:18 np0005596060 nova_compute[247421]: 2026-01-26 18:09:18.295 247428 INFO nova.virt.libvirt.driver [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Jan 26 13:09:18 np0005596060 nova_compute[247421]: 2026-01-26 18:09:18.299 247428 DEBUG nova.compute.manager [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:09:18 np0005596060 nova_compute[247421]: 2026-01-26 18:09:18.323 247428 DEBUG nova.objects.instance [None req-101dd287-f7f0-4e5c-b813-3d7fc5d02ccf 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 26 13:09:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:09:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:19.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 342 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 26 13:09:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:09:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:20.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:09:20 np0005596060 nova_compute[247421]: 2026-01-26 18:09:20.592 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:21.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 342 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 26 13:09:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:22.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:22 np0005596060 nova_compute[247421]: 2026-01-26 18:09:22.551 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:09:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:23.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 330 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Jan 26 13:09:24 np0005596060 nova_compute[247421]: 2026-01-26 18:09:24.111 247428 DEBUG nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Check if temp file /var/lib/nova/instances/tmpe_c_rmgx exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Jan 26 13:09:24 np0005596060 nova_compute[247421]: 2026-01-26 18:09:24.112 247428 DEBUG nova.compute.manager [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpe_c_rmgx',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4c4b2733-13a7-49fe-bbfb-f3e063298716',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Jan 26 13:09:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:24.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:25 np0005596060 nova_compute[247421]: 2026-01-26 18:09:25.595 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:25.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 269 KiB/s rd, 195 KiB/s wr, 39 op/s
Jan 26 13:09:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:26.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:27 np0005596060 nova_compute[247421]: 2026-01-26 18:09:27.553 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:27.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 269 KiB/s rd, 195 KiB/s wr, 39 op/s
Jan 26 13:09:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:28.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:09:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:29.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:29 np0005596060 nova_compute[247421]: 2026-01-26 18:09:29.914 247428 DEBUG nova.compute.manager [req-dad602ec-b9c5-4312-933a-0884609ffb86 req-d7b24a9e-8bb9-46aa-93f8-5ace974e6d8d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received event network-vif-unplugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:09:29 np0005596060 nova_compute[247421]: 2026-01-26 18:09:29.914 247428 DEBUG oslo_concurrency.lockutils [req-dad602ec-b9c5-4312-933a-0884609ffb86 req-d7b24a9e-8bb9-46aa-93f8-5ace974e6d8d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:29 np0005596060 nova_compute[247421]: 2026-01-26 18:09:29.914 247428 DEBUG oslo_concurrency.lockutils [req-dad602ec-b9c5-4312-933a-0884609ffb86 req-d7b24a9e-8bb9-46aa-93f8-5ace974e6d8d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:29 np0005596060 nova_compute[247421]: 2026-01-26 18:09:29.915 247428 DEBUG oslo_concurrency.lockutils [req-dad602ec-b9c5-4312-933a-0884609ffb86 req-d7b24a9e-8bb9-46aa-93f8-5ace974e6d8d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:29 np0005596060 nova_compute[247421]: 2026-01-26 18:09:29.915 247428 DEBUG nova.compute.manager [req-dad602ec-b9c5-4312-933a-0884609ffb86 req-d7b24a9e-8bb9-46aa-93f8-5ace974e6d8d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] No waiting events found dispatching network-vif-unplugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:09:29 np0005596060 nova_compute[247421]: 2026-01-26 18:09:29.915 247428 DEBUG nova.compute.manager [req-dad602ec-b9c5-4312-933a-0884609ffb86 req-d7b24a9e-8bb9-46aa-93f8-5ace974e6d8d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received event network-vif-unplugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:09:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Jan 26 13:09:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:30.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:30 np0005596060 nova_compute[247421]: 2026-01-26 18:09:30.597 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.064 247428 INFO nova.compute.manager [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Took 5.66 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.065 247428 DEBUG nova.compute.manager [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.103 247428 DEBUG nova.compute.manager [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpe_c_rmgx',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='4c4b2733-13a7-49fe-bbfb-f3e063298716',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(152f8f11-07d7-44a8-a790-2be851474e39),old_vol_attachment_ids={b5b60a57-95c9-48f2-a72a-66b14f738be8='ac2346bc-53c2-4bf5-b1e2-545f402e338e'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.108 247428 DEBUG nova.objects.instance [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lazy-loading 'migration_context' on Instance uuid 4c4b2733-13a7-49fe-bbfb-f3e063298716 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.110 247428 DEBUG nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.111 247428 DEBUG nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.112 247428 DEBUG nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.130 247428 DEBUG nova.virt.libvirt.migration [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Find same serial number: pos=1, serial=b5b60a57-95c9-48f2-a72a-66b14f738be8 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.132 247428 DEBUG nova.virt.libvirt.vif [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T18:08:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-634605113',display_name='tempest-LiveMigrationTest-server-634605113',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-634605113',id=7,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:08:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b1f2cad350784d7eae39fc23fb032500',ramdisk_id='',reservation_id='r-8pp60248',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest
-877386369',owner_user_name='tempest-LiveMigrationTest-877386369-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:09:18Z,user_data=None,user_id='9e3f505042e7463683259f02e8e59eca',uuid=4c4b2733-13a7-49fe-bbfb-f3e063298716,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "address": "fa:16:3e:89:24:36", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapc3bd4b07-ea", "ovs_interfaceid": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.132 247428 DEBUG nova.network.os_vif_util [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Converting VIF {"id": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "address": "fa:16:3e:89:24:36", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapc3bd4b07-ea", "ovs_interfaceid": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.133 247428 DEBUG nova.network.os_vif_util [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:24:36,bridge_name='br-int',has_traffic_filtering=True,id=c3bd4b07-ea7b-40da-8a33-0ac219177512,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3bd4b07-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.133 247428 DEBUG nova.virt.libvirt.migration [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Updating guest XML with vif config: <interface type="ethernet">
Jan 26 13:09:31 np0005596060 nova_compute[247421]:  <mac address="fa:16:3e:89:24:36"/>
Jan 26 13:09:31 np0005596060 nova_compute[247421]:  <model type="virtio"/>
Jan 26 13:09:31 np0005596060 nova_compute[247421]:  <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:09:31 np0005596060 nova_compute[247421]:  <mtu size="1442"/>
Jan 26 13:09:31 np0005596060 nova_compute[247421]:  <target dev="tapc3bd4b07-ea"/>
Jan 26 13:09:31 np0005596060 nova_compute[247421]: </interface>
Jan 26 13:09:31 np0005596060 nova_compute[247421]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.133 247428 DEBUG nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.615 247428 DEBUG nova.virt.libvirt.migration [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.616 247428 INFO nova.virt.libvirt.migration [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Jan 26 13:09:31 np0005596060 nova_compute[247421]: 2026-01-26 18:09:31.696 247428 INFO nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Jan 26 13:09:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:31.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.200 247428 DEBUG nova.virt.libvirt.migration [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.201 247428 DEBUG nova.virt.libvirt.migration [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 26 13:09:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:32.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.272 247428 DEBUG nova.compute.manager [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received event network-vif-plugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.273 247428 DEBUG oslo_concurrency.lockutils [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.273 247428 DEBUG oslo_concurrency.lockutils [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.273 247428 DEBUG oslo_concurrency.lockutils [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.273 247428 DEBUG nova.compute.manager [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] No waiting events found dispatching network-vif-plugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.273 247428 WARNING nova.compute.manager [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received unexpected event network-vif-plugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 for instance with vm_state active and task_state migrating.#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.273 247428 DEBUG nova.compute.manager [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received event network-changed-c3bd4b07-ea7b-40da-8a33-0ac219177512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.274 247428 DEBUG nova.compute.manager [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Refreshing instance network info cache due to event network-changed-c3bd4b07-ea7b-40da-8a33-0ac219177512. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.274 247428 DEBUG oslo_concurrency.lockutils [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-4c4b2733-13a7-49fe-bbfb-f3e063298716" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.274 247428 DEBUG oslo_concurrency.lockutils [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-4c4b2733-13a7-49fe-bbfb-f3e063298716" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.274 247428 DEBUG nova.network.neutron [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Refreshing network info cache for port c3bd4b07-ea7b-40da-8a33-0ac219177512 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.555 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.705 247428 DEBUG nova.virt.libvirt.migration [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.705 247428 DEBUG nova.virt.libvirt.migration [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.747 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769450972.747583, 4c4b2733-13a7-49fe-bbfb-f3e063298716 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.748 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] VM Paused (Lifecycle Event)#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.778 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.783 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:09:32 np0005596060 nova_compute[247421]: 2026-01-26 18:09:32.804 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Jan 26 13:09:33 np0005596060 kernel: tapc3bd4b07-ea (unregistering): left promiscuous mode
Jan 26 13:09:33 np0005596060 NetworkManager[48900]: <info>  [1769450973.0724] device (tapc3bd4b07-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.083 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:33 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:33Z|00068|binding|INFO|Releasing lport c3bd4b07-ea7b-40da-8a33-0ac219177512 from this chassis (sb_readonly=0)
Jan 26 13:09:33 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:33Z|00069|binding|INFO|Setting lport c3bd4b07-ea7b-40da-8a33-0ac219177512 down in Southbound
Jan 26 13:09:33 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:33Z|00070|binding|INFO|Removing iface tapc3bd4b07-ea ovn-installed in OVS
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.089 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.093 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:24:36 10.100.0.12'], port_security=['fa:16:3e:89:24:36 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '9838f21e-c1ce-4cfa-829e-a12b9d657d8a'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4c4b2733-13a7-49fe-bbfb-f3e063298716', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0516cc55-93b8-4bf2-b595-d07702fa255b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1f2cad350784d7eae39fc23fb032500', 'neutron:revision_number': '18', 'neutron:security_group_ids': '4e1bd851-4cc2-4677-be2e-39f74460bffd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db9bad5b-1a88-4481-85c1-c131f59dea19, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=c3bd4b07-ea7b-40da-8a33-0ac219177512) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.094 159331 INFO neutron.agent.ovn.metadata.agent [-] Port c3bd4b07-ea7b-40da-8a33-0ac219177512 in datapath 0516cc55-93b8-4bf2-b595-d07702fa255b unbound from our chassis#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.096 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0516cc55-93b8-4bf2-b595-d07702fa255b#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.106 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.116 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[54c16fdb-8f67-4fb9-8338-d4da3ba0d180]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.147 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[5d88a162-a92f-4e75-8e20-7958c61e032a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.151 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f16b5d-1e34-4fa9-9c8f-fe9ee9d936f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:33 np0005596060 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 26 13:09:33 np0005596060 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000007.scope: Consumed 3.412s CPU time.
Jan 26 13:09:33 np0005596060 systemd-machined[213879]: Machine qemu-5-instance-00000007 terminated.
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.185 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[2a04f5b4-4e9b-4bab-8744-46aefa9b3562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.201 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb68397-9df8-40b1-9639-642747cc4a49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0516cc55-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:40:ef'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 29, 'tx_packets': 9, 'rx_bytes': 1498, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 29, 'tx_packets': 9, 'rx_bytes': 1498, 'tx_bytes': 614, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464569, 'reachable_time': 31510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 5, 'outoctets': 376, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 5, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 376, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 5, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258682, 'error': None, 'target': 'ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.219 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff1d953-41ee-47bb-84f3-6331020a3f1d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap0516cc55-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464582, 'tstamp': 464582}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258683, 'error': None, 'target': 'ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap0516cc55-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464587, 'tstamp': 464587}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258683, 'error': None, 'target': 'ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.221 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0516cc55-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.223 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.227 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.228 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0516cc55-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.228 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.228 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0516cc55-90, col_values=(('external_ids', {'iface-id': '46cfbba6-430a-495c-9d6a-60cf58c877d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:33.228 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:09:33 np0005596060 virtqemud[246749]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-b5b60a57-95c9-48f2-a72a-66b14f738be8: No such file or directory
Jan 26 13:09:33 np0005596060 virtqemud[246749]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-b5b60a57-95c9-48f2-a72a-66b14f738be8: No such file or directory
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.368 247428 DEBUG nova.virt.libvirt.guest [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.368 247428 INFO nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Migration operation has completed#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.368 247428 INFO nova.compute.manager [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] _post_live_migration() is started..#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.370 247428 DEBUG nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.370 247428 DEBUG nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.371 247428 DEBUG nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.376 247428 DEBUG nova.compute.manager [req-98bca9a3-36b8-4a09-a66c-c29e642f40b9 req-08323591-471e-4cc7-b5cf-31dcee0e6524 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received event network-vif-unplugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.377 247428 DEBUG oslo_concurrency.lockutils [req-98bca9a3-36b8-4a09-a66c-c29e642f40b9 req-08323591-471e-4cc7-b5cf-31dcee0e6524 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.377 247428 DEBUG oslo_concurrency.lockutils [req-98bca9a3-36b8-4a09-a66c-c29e642f40b9 req-08323591-471e-4cc7-b5cf-31dcee0e6524 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.377 247428 DEBUG oslo_concurrency.lockutils [req-98bca9a3-36b8-4a09-a66c-c29e642f40b9 req-08323591-471e-4cc7-b5cf-31dcee0e6524 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.377 247428 DEBUG nova.compute.manager [req-98bca9a3-36b8-4a09-a66c-c29e642f40b9 req-08323591-471e-4cc7-b5cf-31dcee0e6524 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] No waiting events found dispatching network-vif-unplugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:09:33 np0005596060 nova_compute[247421]: 2026-01-26 18:09:33.378 247428 DEBUG nova.compute.manager [req-98bca9a3-36b8-4a09-a66c-c29e642f40b9 req-08323591-471e-4cc7-b5cf-31dcee0e6524 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received event network-vif-unplugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:09:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:09:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:33.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 26 13:09:34 np0005596060 nova_compute[247421]: 2026-01-26 18:09:34.090 247428 DEBUG nova.network.neutron [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Updated VIF entry in instance network info cache for port c3bd4b07-ea7b-40da-8a33-0ac219177512. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:09:34 np0005596060 nova_compute[247421]: 2026-01-26 18:09:34.091 247428 DEBUG nova.network.neutron [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Updating instance_info_cache with network_info: [{"id": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "address": "fa:16:3e:89:24:36", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3bd4b07-ea", "ovs_interfaceid": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:09:34 np0005596060 nova_compute[247421]: 2026-01-26 18:09:34.110 247428 DEBUG oslo_concurrency.lockutils [req-57bf6633-1b17-4931-ac18-6b535761369a req-997e3710-4918-4764-9e92-66f6dec16285 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-4c4b2733-13a7-49fe-bbfb-f3e063298716" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:09:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:34.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:34 np0005596060 nova_compute[247421]: 2026-01-26 18:09:34.695 247428 DEBUG nova.compute.manager [req-2eddb719-caeb-4cf0-824d-ad27407e4434 req-abdfcd78-8cca-402f-abdd-8148a2f542b5 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received event network-vif-unplugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:09:34 np0005596060 nova_compute[247421]: 2026-01-26 18:09:34.696 247428 DEBUG oslo_concurrency.lockutils [req-2eddb719-caeb-4cf0-824d-ad27407e4434 req-abdfcd78-8cca-402f-abdd-8148a2f542b5 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:34 np0005596060 nova_compute[247421]: 2026-01-26 18:09:34.696 247428 DEBUG oslo_concurrency.lockutils [req-2eddb719-caeb-4cf0-824d-ad27407e4434 req-abdfcd78-8cca-402f-abdd-8148a2f542b5 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:34 np0005596060 nova_compute[247421]: 2026-01-26 18:09:34.697 247428 DEBUG oslo_concurrency.lockutils [req-2eddb719-caeb-4cf0-824d-ad27407e4434 req-abdfcd78-8cca-402f-abdd-8148a2f542b5 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:34 np0005596060 nova_compute[247421]: 2026-01-26 18:09:34.697 247428 DEBUG nova.compute.manager [req-2eddb719-caeb-4cf0-824d-ad27407e4434 req-abdfcd78-8cca-402f-abdd-8148a2f542b5 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] No waiting events found dispatching network-vif-unplugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:09:34 np0005596060 nova_compute[247421]: 2026-01-26 18:09:34.697 247428 DEBUG nova.compute.manager [req-2eddb719-caeb-4cf0-824d-ad27407e4434 req-abdfcd78-8cca-402f-abdd-8148a2f542b5 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received event network-vif-unplugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.027 247428 DEBUG nova.network.neutron [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Activated binding for port c3bd4b07-ea7b-40da-8a33-0ac219177512 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.027 247428 DEBUG nova.compute.manager [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "address": "fa:16:3e:89:24:36", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3bd4b07-ea", "ovs_interfaceid": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.029 247428 DEBUG nova.virt.libvirt.vif [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T18:08:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-634605113',display_name='tempest-LiveMigrationTest-server-634605113',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-634605113',id=7,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:08:58Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b1f2cad350784d7eae39fc23fb032500',ramdisk_id='',reservation_id='r-8pp60248',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest
-877386369',owner_user_name='tempest-LiveMigrationTest-877386369-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:09:22Z,user_data=None,user_id='9e3f505042e7463683259f02e8e59eca',uuid=4c4b2733-13a7-49fe-bbfb-f3e063298716,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "address": "fa:16:3e:89:24:36", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3bd4b07-ea", "ovs_interfaceid": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.030 247428 DEBUG nova.network.os_vif_util [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Converting VIF {"id": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "address": "fa:16:3e:89:24:36", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3bd4b07-ea", "ovs_interfaceid": "c3bd4b07-ea7b-40da-8a33-0ac219177512", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.031 247428 DEBUG nova.network.os_vif_util [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:24:36,bridge_name='br-int',has_traffic_filtering=True,id=c3bd4b07-ea7b-40da-8a33-0ac219177512,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3bd4b07-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.031 247428 DEBUG os_vif [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:24:36,bridge_name='br-int',has_traffic_filtering=True,id=c3bd4b07-ea7b-40da-8a33-0ac219177512,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3bd4b07-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.034 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.034 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3bd4b07-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.036 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.038 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.041 247428 INFO os_vif [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:24:36,bridge_name='br-int',has_traffic_filtering=True,id=c3bd4b07-ea7b-40da-8a33-0ac219177512,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3bd4b07-ea')#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.042 247428 DEBUG oslo_concurrency.lockutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.042 247428 DEBUG oslo_concurrency.lockutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.043 247428 DEBUG oslo_concurrency.lockutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.043 247428 DEBUG nova.compute.manager [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.044 247428 INFO nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Deleting instance files /var/lib/nova/instances/4c4b2733-13a7-49fe-bbfb-f3e063298716_del#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.045 247428 INFO nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Deletion of /var/lib/nova/instances/4c4b2733-13a7-49fe-bbfb-f3e063298716_del complete#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.491 247428 DEBUG nova.compute.manager [req-8bc7302b-0d14-48a5-8909-8232be041f47 req-1966e028-729a-4a9d-bafa-1b8fb08bb448 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received event network-vif-plugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.491 247428 DEBUG oslo_concurrency.lockutils [req-8bc7302b-0d14-48a5-8909-8232be041f47 req-1966e028-729a-4a9d-bafa-1b8fb08bb448 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.492 247428 DEBUG oslo_concurrency.lockutils [req-8bc7302b-0d14-48a5-8909-8232be041f47 req-1966e028-729a-4a9d-bafa-1b8fb08bb448 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.492 247428 DEBUG oslo_concurrency.lockutils [req-8bc7302b-0d14-48a5-8909-8232be041f47 req-1966e028-729a-4a9d-bafa-1b8fb08bb448 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.492 247428 DEBUG nova.compute.manager [req-8bc7302b-0d14-48a5-8909-8232be041f47 req-1966e028-729a-4a9d-bafa-1b8fb08bb448 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] No waiting events found dispatching network-vif-plugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.492 247428 WARNING nova.compute.manager [req-8bc7302b-0d14-48a5-8909-8232be041f47 req-1966e028-729a-4a9d-bafa-1b8fb08bb448 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received unexpected event network-vif-plugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 for instance with vm_state active and task_state migrating.#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.493 247428 DEBUG nova.compute.manager [req-8bc7302b-0d14-48a5-8909-8232be041f47 req-1966e028-729a-4a9d-bafa-1b8fb08bb448 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received event network-vif-plugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.493 247428 DEBUG oslo_concurrency.lockutils [req-8bc7302b-0d14-48a5-8909-8232be041f47 req-1966e028-729a-4a9d-bafa-1b8fb08bb448 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.493 247428 DEBUG oslo_concurrency.lockutils [req-8bc7302b-0d14-48a5-8909-8232be041f47 req-1966e028-729a-4a9d-bafa-1b8fb08bb448 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.493 247428 DEBUG oslo_concurrency.lockutils [req-8bc7302b-0d14-48a5-8909-8232be041f47 req-1966e028-729a-4a9d-bafa-1b8fb08bb448 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.493 247428 DEBUG nova.compute.manager [req-8bc7302b-0d14-48a5-8909-8232be041f47 req-1966e028-729a-4a9d-bafa-1b8fb08bb448 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] No waiting events found dispatching network-vif-plugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.494 247428 WARNING nova.compute.manager [req-8bc7302b-0d14-48a5-8909-8232be041f47 req-1966e028-729a-4a9d-bafa-1b8fb08bb448 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Received unexpected event network-vif-plugged-c3bd4b07-ea7b-40da-8a33-0ac219177512 for instance with vm_state active and task_state migrating.#033[00m
Jan 26 13:09:35 np0005596060 nova_compute[247421]: 2026-01-26 18:09:35.599 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:35.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 4.0 KiB/s rd, 4 op/s
Jan 26 13:09:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:36.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:36 np0005596060 podman[258698]: 2026-01-26 18:09:36.814338027 +0000 UTC m=+0.064127355 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 26 13:09:36 np0005596060 podman[258699]: 2026-01-26 18:09:36.869009597 +0000 UTC m=+0.112871698 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 13:09:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:37.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 26 13:09:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:38.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:09:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:39.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 26 13:09:40 np0005596060 nova_compute[247421]: 2026-01-26 18:09:40.036 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:40.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:40 np0005596060 nova_compute[247421]: 2026-01-26 18:09:40.601 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:40 np0005596060 nova_compute[247421]: 2026-01-26 18:09:40.899 247428 DEBUG oslo_concurrency.lockutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquiring lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:40 np0005596060 nova_compute[247421]: 2026-01-26 18:09:40.900 247428 DEBUG oslo_concurrency.lockutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:40 np0005596060 nova_compute[247421]: 2026-01-26 18:09:40.901 247428 DEBUG oslo_concurrency.lockutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "4c4b2733-13a7-49fe-bbfb-f3e063298716-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:40 np0005596060 nova_compute[247421]: 2026-01-26 18:09:40.929 247428 DEBUG oslo_concurrency.lockutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:40 np0005596060 nova_compute[247421]: 2026-01-26 18:09:40.929 247428 DEBUG oslo_concurrency.lockutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:40 np0005596060 nova_compute[247421]: 2026-01-26 18:09:40.930 247428 DEBUG oslo_concurrency.lockutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:40 np0005596060 nova_compute[247421]: 2026-01-26 18:09:40.930 247428 DEBUG nova.compute.resource_tracker [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:09:40 np0005596060 nova_compute[247421]: 2026-01-26 18:09:40.931 247428 DEBUG oslo_concurrency.processutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:09:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:09:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403378656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:09:41 np0005596060 nova_compute[247421]: 2026-01-26 18:09:41.604 247428 DEBUG oslo_concurrency.processutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:09:41 np0005596060 nova_compute[247421]: 2026-01-26 18:09:41.676 247428 DEBUG nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:09:41 np0005596060 nova_compute[247421]: 2026-01-26 18:09:41.677 247428 DEBUG nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:09:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:41.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:41 np0005596060 nova_compute[247421]: 2026-01-26 18:09:41.923 247428 WARNING nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:09:41 np0005596060 nova_compute[247421]: 2026-01-26 18:09:41.926 247428 DEBUG nova.compute.resource_tracker [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4592MB free_disk=20.94263458251953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:09:41 np0005596060 nova_compute[247421]: 2026-01-26 18:09:41.927 247428 DEBUG oslo_concurrency.lockutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:41 np0005596060 nova_compute[247421]: 2026-01-26 18:09:41.927 247428 DEBUG oslo_concurrency.lockutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 26 13:09:42 np0005596060 nova_compute[247421]: 2026-01-26 18:09:42.190 247428 DEBUG nova.compute.resource_tracker [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Migration for instance 4c4b2733-13a7-49fe-bbfb-f3e063298716 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 26 13:09:42 np0005596060 nova_compute[247421]: 2026-01-26 18:09:42.227 247428 DEBUG nova.compute.resource_tracker [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Jan 26 13:09:42 np0005596060 nova_compute[247421]: 2026-01-26 18:09:42.266 247428 DEBUG nova.compute.resource_tracker [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Instance e40120ae-eb4e-4f0b-9d8f-f0210de78c4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:09:42 np0005596060 nova_compute[247421]: 2026-01-26 18:09:42.266 247428 DEBUG nova.compute.resource_tracker [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Migration 152f8f11-07d7-44a8-a790-2be851474e39 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 26 13:09:42 np0005596060 nova_compute[247421]: 2026-01-26 18:09:42.267 247428 DEBUG nova.compute.resource_tracker [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:09:42 np0005596060 nova_compute[247421]: 2026-01-26 18:09:42.267 247428 DEBUG nova.compute.resource_tracker [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:09:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:42.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:42 np0005596060 nova_compute[247421]: 2026-01-26 18:09:42.357 247428 DEBUG oslo_concurrency.processutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:09:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:09:43 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601800338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:09:43 np0005596060 nova_compute[247421]: 2026-01-26 18:09:43.391 247428 DEBUG oslo_concurrency.processutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:09:43 np0005596060 nova_compute[247421]: 2026-01-26 18:09:43.400 247428 DEBUG nova.compute.provider_tree [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:09:43 np0005596060 nova_compute[247421]: 2026-01-26 18:09:43.423 247428 DEBUG nova.scheduler.client.report [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:09:43 np0005596060 nova_compute[247421]: 2026-01-26 18:09:43.426 247428 DEBUG nova.compute.resource_tracker [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:09:43 np0005596060 nova_compute[247421]: 2026-01-26 18:09:43.426 247428 DEBUG oslo_concurrency.lockutils [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:43 np0005596060 nova_compute[247421]: 2026-01-26 18:09:43.436 247428 INFO nova.compute.manager [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Migrating instance to compute-2.ctlplane.example.com finished successfully.#033[00m
Jan 26 13:09:43 np0005596060 nova_compute[247421]: 2026-01-26 18:09:43.533 247428 INFO nova.scheduler.client.report [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] Deleted allocation for migration 152f8f11-07d7-44a8-a790-2be851474e39#033[00m
Jan 26 13:09:43 np0005596060 nova_compute[247421]: 2026-01-26 18:09:43.535 247428 DEBUG nova.virt.libvirt.driver [None req-0e651a6d-bbb5-488a-bbe8-7c3372884e76 430881eef73e44a38752c2354824111c 9a36b7a9c98845ffaadadf6d0a7eb3a8 - - default default] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Jan 26 13:09:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:09:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:43.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:09:44
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'images', 'vms', 'backups', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root']
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:09:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:09:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:44.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:09:44 np0005596060 ceph-mgr[74563]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2716354406
Jan 26 13:09:45 np0005596060 nova_compute[247421]: 2026-01-26 18:09:45.038 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:45 np0005596060 nova_compute[247421]: 2026-01-26 18:09:45.604 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:45.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 1023 B/s wr, 0 op/s
Jan 26 13:09:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:46.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:47.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Jan 26 13:09:48 np0005596060 nova_compute[247421]: 2026-01-26 18:09:48.368 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769450973.3675363, 4c4b2733-13a7-49fe-bbfb-f3e063298716 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:09:48 np0005596060 nova_compute[247421]: 2026-01-26 18:09:48.368 247428 INFO nova.compute.manager [-] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:09:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:48.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:48 np0005596060 nova_compute[247421]: 2026-01-26 18:09:48.457 247428 DEBUG nova.compute.manager [None req-18b01069-941d-4528-81c2-5568059b6e0c - - - - - -] [instance: 4c4b2733-13a7-49fe-bbfb-f3e063298716] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:09:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.327 247428 DEBUG oslo_concurrency.lockutils [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Acquiring lock "e40120ae-eb4e-4f0b-9d8f-f0210de78c4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.328 247428 DEBUG oslo_concurrency.lockutils [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Lock "e40120ae-eb4e-4f0b-9d8f-f0210de78c4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.328 247428 DEBUG oslo_concurrency.lockutils [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Acquiring lock "e40120ae-eb4e-4f0b-9d8f-f0210de78c4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.328 247428 DEBUG oslo_concurrency.lockutils [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Lock "e40120ae-eb4e-4f0b-9d8f-f0210de78c4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.328 247428 DEBUG oslo_concurrency.lockutils [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Lock "e40120ae-eb4e-4f0b-9d8f-f0210de78c4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.329 247428 INFO nova.compute.manager [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Terminating instance#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.331 247428 DEBUG nova.compute.manager [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:09:49 np0005596060 kernel: tap06538465-e3 (unregistering): left promiscuous mode
Jan 26 13:09:49 np0005596060 NetworkManager[48900]: <info>  [1769450989.3877] device (tap06538465-e3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:09:49 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:49Z|00071|binding|INFO|Releasing lport 06538465-e309-4216-af1a-244565d3805b from this chassis (sb_readonly=0)
Jan 26 13:09:49 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:49Z|00072|binding|INFO|Setting lport 06538465-e309-4216-af1a-244565d3805b down in Southbound
Jan 26 13:09:49 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:49Z|00073|binding|INFO|Releasing lport 8efebc34-f8eb-42e5-af94-78e84c0dcbba from this chassis (sb_readonly=0)
Jan 26 13:09:49 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:49Z|00074|binding|INFO|Setting lport 8efebc34-f8eb-42e5-af94-78e84c0dcbba down in Southbound
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.394 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:49 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:49Z|00075|binding|INFO|Removing iface tap06538465-e3 ovn-installed in OVS
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.396 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:49 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:49Z|00076|binding|INFO|Releasing lport 46cfbba6-430a-495c-9d6a-60cf58c877d3 from this chassis (sb_readonly=0)
Jan 26 13:09:49 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:49Z|00077|binding|INFO|Releasing lport ec5ab65e-333c-4443-bd37-b74fa484479e from this chassis (sb_readonly=0)
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.401 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:48:ae 10.100.0.14'], port_security=['fa:16:3e:35:48:ae 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-1321931442', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'e40120ae-eb4e-4f0b-9d8f-f0210de78c4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0516cc55-93b8-4bf2-b595-d07702fa255b', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-1321931442', 'neutron:project_id': 'b1f2cad350784d7eae39fc23fb032500', 'neutron:revision_number': '11', 'neutron:security_group_ids': '4e1bd851-4cc2-4677-be2e-39f74460bffd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=db9bad5b-1a88-4481-85c1-c131f59dea19, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=06538465-e309-4216-af1a-244565d3805b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.403 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:69:fa 19.80.0.72'], port_security=['fa:16:3e:c6:69:fa 19.80.0.72'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['06538465-e309-4216-af1a-244565d3805b'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-2075617635', 'neutron:cidrs': '19.80.0.72/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebb9e0b4-8385-462a-84cc-87c6f72c0c65', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-2075617635', 'neutron:project_id': 'b1f2cad350784d7eae39fc23fb032500', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4e1bd851-4cc2-4677-be2e-39f74460bffd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=75dd0954-cbf3-4a3e-a6ef-19fcd101cc5d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=8efebc34-f8eb-42e5-af94-78e84c0dcbba) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.404 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 06538465-e309-4216-af1a-244565d3805b in datapath 0516cc55-93b8-4bf2-b595-d07702fa255b unbound from our chassis#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.406 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0516cc55-93b8-4bf2-b595-d07702fa255b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.407 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce1c3c9-7ffa-49b7-8d67-7dd6332f5739]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.407 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b namespace which is not needed anymore#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.426 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:49 np0005596060 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 26 13:09:49 np0005596060 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000005.scope: Consumed 6.402s CPU time.
Jan 26 13:09:49 np0005596060 systemd-machined[213879]: Machine qemu-4-instance-00000005 terminated.
Jan 26 13:09:49 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:49Z|00078|binding|INFO|Releasing lport 46cfbba6-430a-495c-9d6a-60cf58c877d3 from this chassis (sb_readonly=0)
Jan 26 13:09:49 np0005596060 ovn_controller[148842]: 2026-01-26T18:09:49Z|00079|binding|INFO|Releasing lport ec5ab65e-333c-4443-bd37-b74fa484479e from this chassis (sb_readonly=0)
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.495 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:49 np0005596060 neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b[256749]: [NOTICE]   (256753) : haproxy version is 2.8.14-c23fe91
Jan 26 13:09:49 np0005596060 neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b[256749]: [NOTICE]   (256753) : path to executable is /usr/sbin/haproxy
Jan 26 13:09:49 np0005596060 neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b[256749]: [WARNING]  (256753) : Exiting Master process...
Jan 26 13:09:49 np0005596060 neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b[256749]: [WARNING]  (256753) : Exiting Master process...
Jan 26 13:09:49 np0005596060 neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b[256749]: [ALERT]    (256753) : Current worker (256755) exited with code 143 (Terminated)
Jan 26 13:09:49 np0005596060 neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b[256749]: [WARNING]  (256753) : All workers exited. Exiting... (0)
Jan 26 13:09:49 np0005596060 systemd[1]: libpod-10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4.scope: Deactivated successfully.
Jan 26 13:09:49 np0005596060 podman[258866]: 2026-01-26 18:09:49.539963922 +0000 UTC m=+0.046308804 container died 10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.550 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.556 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.569 247428 INFO nova.virt.libvirt.driver [-] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Instance destroyed successfully.#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.570 247428 DEBUG nova.objects.instance [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Lazy-loading 'resources' on Instance uuid e40120ae-eb4e-4f0b-9d8f-f0210de78c4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:09:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4-userdata-shm.mount: Deactivated successfully.
Jan 26 13:09:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b1493ff747d35994ace7394a1fe08102e85ca6f8fe76708c18a10fd116c695f1-merged.mount: Deactivated successfully.
Jan 26 13:09:49 np0005596060 podman[258866]: 2026-01-26 18:09:49.589048414 +0000 UTC m=+0.095393296 container cleanup 10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:09:49 np0005596060 systemd[1]: libpod-conmon-10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4.scope: Deactivated successfully.
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.643 247428 DEBUG nova.virt.libvirt.vif [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T18:07:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1296850176',display_name='tempest-LiveMigrationTest-server-1296850176',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1296850176',id=5,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:07:36Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b1f2cad350784d7eae39fc23fb032500',ramdisk_id='',reservation_id='r-02y9chrd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',i
mage_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-877386369',owner_user_name='tempest-LiveMigrationTest-877386369-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:08:10Z,user_data=None,user_id='9e3f505042e7463683259f02e8e59eca',uuid=e40120ae-eb4e-4f0b-9d8f-f0210de78c4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06538465-e309-4216-af1a-244565d3805b", "address": "fa:16:3e:35:48:ae", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06538465-e3", "ovs_interfaceid": "06538465-e309-4216-af1a-244565d3805b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.643 247428 DEBUG nova.network.os_vif_util [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Converting VIF {"id": "06538465-e309-4216-af1a-244565d3805b", "address": "fa:16:3e:35:48:ae", "network": {"id": "0516cc55-93b8-4bf2-b595-d07702fa255b", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1766120094-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b1f2cad350784d7eae39fc23fb032500", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06538465-e3", "ovs_interfaceid": "06538465-e309-4216-af1a-244565d3805b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.644 247428 DEBUG nova.network.os_vif_util [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:35:48:ae,bridge_name='br-int',has_traffic_filtering=True,id=06538465-e309-4216-af1a-244565d3805b,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap06538465-e3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.644 247428 DEBUG os_vif [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:48:ae,bridge_name='br-int',has_traffic_filtering=True,id=06538465-e309-4216-af1a-244565d3805b,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap06538465-e3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.646 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.646 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06538465-e3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.650 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.655 247428 INFO os_vif [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:48:ae,bridge_name='br-int',has_traffic_filtering=True,id=06538465-e309-4216-af1a-244565d3805b,network=Network(0516cc55-93b8-4bf2-b595-d07702fa255b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap06538465-e3')#033[00m
Jan 26 13:09:49 np0005596060 podman[258905]: 2026-01-26 18:09:49.656643623 +0000 UTC m=+0.044906900 container remove 10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.663 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[45a8da44-d28f-41fd-b597-0e897534f9e5]: (4, ('Mon Jan 26 06:09:49 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b (10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4)\n10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4\nMon Jan 26 06:09:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b (10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4)\n10e1ea732b76d48bd2c66a33e7c8d2f87816efe20106b846838307b19c19a0f4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.666 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ab7ed7-4e13-434e-bf85-f79e6e6b4325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.667 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0516cc55-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:49 np0005596060 kernel: tap0516cc55-90: left promiscuous mode
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.677 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.681 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.684 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f319e9-3724-46a0-860a-70b21b5d193a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.701 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[30c24d2f-a0f3-403a-840a-3dd9949a17bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.702 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[cd060e4a-f92e-4f67-b2b1-09df0ba88517]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.719 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7540af73-7cd5-48d8-aaea-0dbf36df7678]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464557, 'reachable_time': 18379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258938, 'error': None, 'target': 'ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:49 np0005596060 systemd[1]: run-netns-ovnmeta\x2d0516cc55\x2d93b8\x2d4bf2\x2db595\x2dd07702fa255b.mount: Deactivated successfully.
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.722 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0516cc55-93b8-4bf2-b595-d07702fa255b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.722 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[1190bf2f-3f10-4ba9-9c9b-cc6f5e98c70d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.723 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 8efebc34-f8eb-42e5-af94-78e84c0dcbba in datapath ebb9e0b4-8385-462a-84cc-87c6f72c0c65 unbound from our chassis#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.725 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebb9e0b4-8385-462a-84cc-87c6f72c0c65, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.725 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[71cda23e-aa4b-47b7-bb59-a54390bcb150]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.726 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65 namespace which is not needed anymore#033[00m
Jan 26 13:09:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:49.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:49 np0005596060 neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65[256909]: [NOTICE]   (256919) : haproxy version is 2.8.14-c23fe91
Jan 26 13:09:49 np0005596060 neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65[256909]: [NOTICE]   (256919) : path to executable is /usr/sbin/haproxy
Jan 26 13:09:49 np0005596060 neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65[256909]: [WARNING]  (256919) : Exiting Master process...
Jan 26 13:09:49 np0005596060 neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65[256909]: [ALERT]    (256919) : Current worker (256937) exited with code 143 (Terminated)
Jan 26 13:09:49 np0005596060 neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65[256909]: [WARNING]  (256919) : All workers exited. Exiting... (0)
Jan 26 13:09:49 np0005596060 systemd[1]: libpod-7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6.scope: Deactivated successfully.
Jan 26 13:09:49 np0005596060 conmon[256909]: conmon 7571b00ebfd15846a528 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6.scope/container/memory.events
Jan 26 13:09:49 np0005596060 podman[258958]: 2026-01-26 18:09:49.869246013 +0000 UTC m=+0.049341210 container died 7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.870 247428 DEBUG nova.compute.manager [req-167a13a4-7f6b-4b80-b841-0cb4a22a930d req-53e96d69-6895-49f8-8e79-c3666053f5fb 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Received event network-vif-unplugged-06538465-e309-4216-af1a-244565d3805b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.871 247428 DEBUG oslo_concurrency.lockutils [req-167a13a4-7f6b-4b80-b841-0cb4a22a930d req-53e96d69-6895-49f8-8e79-c3666053f5fb 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "e40120ae-eb4e-4f0b-9d8f-f0210de78c4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.871 247428 DEBUG oslo_concurrency.lockutils [req-167a13a4-7f6b-4b80-b841-0cb4a22a930d req-53e96d69-6895-49f8-8e79-c3666053f5fb 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "e40120ae-eb4e-4f0b-9d8f-f0210de78c4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.871 247428 DEBUG oslo_concurrency.lockutils [req-167a13a4-7f6b-4b80-b841-0cb4a22a930d req-53e96d69-6895-49f8-8e79-c3666053f5fb 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "e40120ae-eb4e-4f0b-9d8f-f0210de78c4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.871 247428 DEBUG nova.compute.manager [req-167a13a4-7f6b-4b80-b841-0cb4a22a930d req-53e96d69-6895-49f8-8e79-c3666053f5fb 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] No waiting events found dispatching network-vif-unplugged-06538465-e309-4216-af1a-244565d3805b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.872 247428 DEBUG nova.compute.manager [req-167a13a4-7f6b-4b80-b841-0cb4a22a930d req-53e96d69-6895-49f8-8e79-c3666053f5fb 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Received event network-vif-unplugged-06538465-e309-4216-af1a-244565d3805b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:09:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6-userdata-shm.mount: Deactivated successfully.
Jan 26 13:09:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay-af08287eaebd49609f435f1cea3753e275846d56a19fdf30393e40fad7e30fea-merged.mount: Deactivated successfully.
Jan 26 13:09:49 np0005596060 podman[258958]: 2026-01-26 18:09:49.905359585 +0000 UTC m=+0.085454782 container cleanup 7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 13:09:49 np0005596060 systemd[1]: libpod-conmon-7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6.scope: Deactivated successfully.
Jan 26 13:09:49 np0005596060 podman[258986]: 2026-01-26 18:09:49.967778886 +0000 UTC m=+0.042459210 container remove 7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.973 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[1545627c-5701-4ab3-a932-459fe70a6d14]: (4, ('Mon Jan 26 06:09:49 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65 (7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6)\n7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6\nMon Jan 26 06:09:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65 (7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6)\n7571b00ebfd15846a5285693694e7d61cd64a7e0f9ada481883e48cd463b73f6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.974 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5063b558-842b-4438-98d1-c66c3d6af387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.975 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebb9e0b4-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.976 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:49 np0005596060 kernel: tapebb9e0b4-80: left promiscuous mode
Jan 26 13:09:49 np0005596060 nova_compute[247421]: 2026-01-26 18:09:49.990 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:49.992 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0f853c74-856a-49db-bfd0-4b2892b72185]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:50.007 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[af4307d8-63aa-48ae-a560-549e88518786]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:50.008 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7909a6af-2499-43a5-ac83-8b98e7222330]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 200 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 26 13:09:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:50.028 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f166d47b-cd1c-4d91-a96d-a5a7a137eb80]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464659, 'reachable_time': 17546, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259000, 'error': None, 'target': 'ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:50.031 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebb9e0b4-8385-462a-84cc-87c6f72c0c65 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:09:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:50.031 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[ca5a8a23-7b4f-4e27-895f-d6f06aa4882e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:09:50 np0005596060 nova_compute[247421]: 2026-01-26 18:09:50.131 247428 INFO nova.virt.libvirt.driver [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Deleting instance files /var/lib/nova/instances/e40120ae-eb4e-4f0b-9d8f-f0210de78c4f_del#033[00m
Jan 26 13:09:50 np0005596060 nova_compute[247421]: 2026-01-26 18:09:50.131 247428 INFO nova.virt.libvirt.driver [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Deletion of /var/lib/nova/instances/e40120ae-eb4e-4f0b-9d8f-f0210de78c4f_del complete#033[00m
Jan 26 13:09:50 np0005596060 nova_compute[247421]: 2026-01-26 18:09:50.193 247428 INFO nova.compute.manager [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:09:50 np0005596060 nova_compute[247421]: 2026-01-26 18:09:50.194 247428 DEBUG oslo.service.loopingcall [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:09:50 np0005596060 nova_compute[247421]: 2026-01-26 18:09:50.194 247428 DEBUG nova.compute.manager [-] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:09:50 np0005596060 nova_compute[247421]: 2026-01-26 18:09:50.194 247428 DEBUG nova.network.neutron [-] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:09:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:50.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:50 np0005596060 systemd[1]: run-netns-ovnmeta\x2debb9e0b4\x2d8385\x2d462a\x2d84cc\x2d87c6f72c0c65.mount: Deactivated successfully.
Jan 26 13:09:50 np0005596060 nova_compute[247421]: 2026-01-26 18:09:50.606 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:51.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:51 np0005596060 nova_compute[247421]: 2026-01-26 18:09:51.812 247428 DEBUG nova.network.neutron [-] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:09:51 np0005596060 nova_compute[247421]: 2026-01-26 18:09:51.833 247428 INFO nova.compute.manager [-] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Took 1.64 seconds to deallocate network for instance.#033[00m
Jan 26 13:09:51 np0005596060 nova_compute[247421]: 2026-01-26 18:09:51.883 247428 DEBUG oslo_concurrency.lockutils [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:09:51 np0005596060 nova_compute[247421]: 2026-01-26 18:09:51.884 247428 DEBUG oslo_concurrency.lockutils [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:09:51 np0005596060 nova_compute[247421]: 2026-01-26 18:09:51.930 247428 DEBUG oslo_concurrency.processutils [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:09:51 np0005596060 nova_compute[247421]: 2026-01-26 18:09:51.985 247428 DEBUG nova.compute.manager [req-17c0b219-512a-4e5b-acac-af92872386ec req-27921050-d032-4eb7-bfba-776408140885 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Received event network-vif-plugged-06538465-e309-4216-af1a-244565d3805b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 13:09:51 np0005596060 nova_compute[247421]: 2026-01-26 18:09:51.986 247428 DEBUG oslo_concurrency.lockutils [req-17c0b219-512a-4e5b-acac-af92872386ec req-27921050-d032-4eb7-bfba-776408140885 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "e40120ae-eb4e-4f0b-9d8f-f0210de78c4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:09:51 np0005596060 nova_compute[247421]: 2026-01-26 18:09:51.986 247428 DEBUG oslo_concurrency.lockutils [req-17c0b219-512a-4e5b-acac-af92872386ec req-27921050-d032-4eb7-bfba-776408140885 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "e40120ae-eb4e-4f0b-9d8f-f0210de78c4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:09:51 np0005596060 nova_compute[247421]: 2026-01-26 18:09:51.986 247428 DEBUG oslo_concurrency.lockutils [req-17c0b219-512a-4e5b-acac-af92872386ec req-27921050-d032-4eb7-bfba-776408140885 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "e40120ae-eb4e-4f0b-9d8f-f0210de78c4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:09:51 np0005596060 nova_compute[247421]: 2026-01-26 18:09:51.987 247428 DEBUG nova.compute.manager [req-17c0b219-512a-4e5b-acac-af92872386ec req-27921050-d032-4eb7-bfba-776408140885 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] No waiting events found dispatching network-vif-plugged-06538465-e309-4216-af1a-244565d3805b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 13:09:51 np0005596060 nova_compute[247421]: 2026-01-26 18:09:51.987 247428 WARNING nova.compute.manager [req-17c0b219-512a-4e5b-acac-af92872386ec req-27921050-d032-4eb7-bfba-776408140885 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Received unexpected event network-vif-plugged-06538465-e309-4216-af1a-244565d3805b for instance with vm_state deleted and task_state None.
Jan 26 13:09:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 161 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 1.7 KiB/s wr, 23 op/s
Jan 26 13:09:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:09:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2153476980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:09:52 np0005596060 nova_compute[247421]: 2026-01-26 18:09:52.335 247428 DEBUG oslo_concurrency.processutils [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:09:52 np0005596060 nova_compute[247421]: 2026-01-26 18:09:52.342 247428 DEBUG nova.compute.provider_tree [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:09:52 np0005596060 nova_compute[247421]: 2026-01-26 18:09:52.362 247428 DEBUG nova.scheduler.client.report [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:09:52 np0005596060 nova_compute[247421]: 2026-01-26 18:09:52.402 247428 DEBUG oslo_concurrency.lockutils [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:09:52 np0005596060 nova_compute[247421]: 2026-01-26 18:09:52.435 247428 INFO nova.scheduler.client.report [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Deleted allocations for instance e40120ae-eb4e-4f0b-9d8f-f0210de78c4f
Jan 26 13:09:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:52.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:52 np0005596060 nova_compute[247421]: 2026-01-26 18:09:52.507 247428 DEBUG oslo_concurrency.lockutils [None req-001097d0-ac49-4210-a185-ab2db8f93e01 9e3f505042e7463683259f02e8e59eca b1f2cad350784d7eae39fc23fb032500 - - default default] Lock "e40120ae-eb4e-4f0b-9d8f-f0210de78c4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:09:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:09:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:53.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 57 op/s
Jan 26 13:09:54 np0005596060 nova_compute[247421]: 2026-01-26 18:09:54.021 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:09:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:54.021 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 13:09:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:09:54.022 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 13:09:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:54.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:54 np0005596060 nova_compute[247421]: 2026-01-26 18:09:54.648 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.143 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.144 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.144 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.145 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.527 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquiring lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.527 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.547 247428 DEBUG nova.compute.manager [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.614 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.614 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.621 247428 DEBUG nova.virt.hardware [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.622 247428 INFO nova.compute.claims [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Claim successful on node compute-0.ctlplane.example.com
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.643 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:09:55 np0005596060 nova_compute[247421]: 2026-01-26 18:09:55.742 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:09:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:55.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 57 op/s
Jan 26 13:09:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:09:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/580907144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.193 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.200 247428 DEBUG nova.compute.provider_tree [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.266 247428 DEBUG nova.scheduler.client.report [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.302 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.303 247428 DEBUG nova.compute.manager [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.351 247428 DEBUG nova.compute.manager [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.352 247428 DEBUG nova.network.neutron [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.367 247428 INFO nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.384 247428 DEBUG nova.compute.manager [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 13:09:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:09:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:56.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.487 247428 DEBUG nova.compute.manager [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.489 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.489 247428 INFO nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Creating image(s)
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.512 247428 DEBUG nova.storage.rbd_utils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] rbd image 1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.545 247428 DEBUG nova.storage.rbd_utils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] rbd image 1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.573 247428 DEBUG nova.storage.rbd_utils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] rbd image 1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.576 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.600 247428 DEBUG nova.policy [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f068f3a0c9ff42b7b9b2f9c46340f94a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ad07a7cadd1f4901881fdc108d68e6a6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.637 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.638 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquiring lock "0e27310cde9db7031eb6052434134c1283ddf216" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.638 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.638 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.666 247428 DEBUG nova.storage.rbd_utils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] rbd image 1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.670 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.697 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.716 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.716 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.717 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.750 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.750 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 13:09:56 np0005596060 nova_compute[247421]: 2026-01-26 18:09:56.750 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:09:57 np0005596060 nova_compute[247421]: 2026-01-26 18:09:57.593 247428 DEBUG nova.network.neutron [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Successfully created port: 35e49e51-0be6-4711-8885-8e7b05fcbd88 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 13:09:57 np0005596060 nova_compute[247421]: 2026-01-26 18:09:57.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:09:57 np0005596060 nova_compute[247421]: 2026-01-26 18:09:57.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:09:57 np0005596060 nova_compute[247421]: 2026-01-26 18:09:57.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:09:57 np0005596060 nova_compute[247421]: 2026-01-26 18:09:57.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:09:57 np0005596060 nova_compute[247421]: 2026-01-26 18:09:57.679 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:09:57 np0005596060 nova_compute[247421]: 2026-01-26 18:09:57.679 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:09:57 np0005596060 nova_compute[247421]: 2026-01-26 18:09:57.680 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:09:57 np0005596060 nova_compute[247421]: 2026-01-26 18:09:57.680 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 13:09:57 np0005596060 nova_compute[247421]: 2026-01-26 18:09:57.680 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:09:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:57.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 41 MiB data, 258 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 5.0 KiB/s wr, 70 op/s
Jan 26 13:09:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:09:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:09:58.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:09:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:09:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3234165688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:09:58 np0005596060 nova_compute[247421]: 2026-01-26 18:09:58.742 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:09:58 np0005596060 nova_compute[247421]: 2026-01-26 18:09:58.990 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:09:58 np0005596060 nova_compute[247421]: 2026-01-26 18:09:58.994 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4785MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:09:58 np0005596060 nova_compute[247421]: 2026-01-26 18:09:58.994 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:09:58 np0005596060 nova_compute[247421]: 2026-01-26 18:09:58.995 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.092 247428 DEBUG nova.network.neutron [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Successfully updated port: 35e49e51-0be6-4711-8885-8e7b05fcbd88 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.113 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquiring lock "refresh_cache-1bd1db7a-82d9-4a81-9b92-a7e83f037a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.114 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquired lock "refresh_cache-1bd1db7a-82d9-4a81-9b92-a7e83f037a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.114 247428 DEBUG nova.network.neutron [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.116 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.117 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.117 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.180 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.244 247428 DEBUG nova.compute.manager [req-bf3c2339-3497-4a75-a55f-2e0e20a31884 req-d57f774f-aced-4f81-af91-4ab457de0c1c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Received event network-changed-35e49e51-0be6-4711-8885-8e7b05fcbd88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.246 247428 DEBUG nova.compute.manager [req-bf3c2339-3497-4a75-a55f-2e0e20a31884 req-d57f774f-aced-4f81-af91-4ab457de0c1c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Refreshing instance network info cache due to event network-changed-35e49e51-0be6-4711-8885-8e7b05fcbd88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.246 247428 DEBUG oslo_concurrency.lockutils [req-bf3c2339-3497-4a75-a55f-2e0e20a31884 req-d57f774f-aced-4f81-af91-4ab457de0c1c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-1bd1db7a-82d9-4a81-9b92-a7e83f037a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:09:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.275 247428 DEBUG nova.network.neutron [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.651 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:09:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:09:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605314938' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.685 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.691 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.742 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.773 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.774 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:09:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:09:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:09:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:09:59.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:09:59 np0005596060 nova_compute[247421]: 2026-01-26 18:09:59.993 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:10:00 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 13:10:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 41 MiB data, 258 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 4.4 KiB/s wr, 55 op/s
Jan 26 13:10:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:00.024 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:10:00 np0005596060 nova_compute[247421]: 2026-01-26 18:10:00.124 247428 DEBUG nova.storage.rbd_utils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] resizing rbd image 1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 26 13:10:00 np0005596060 nova_compute[247421]: 2026-01-26 18:10:00.315 247428 DEBUG nova.network.neutron [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Updating instance_info_cache with network_info: [{"id": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "address": "fa:16:3e:65:aa:5b", "network": {"id": "eb84b7b2-62ef-4f88-96f7-b0cf584d02d7", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1236286424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad07a7cadd1f4901881fdc108d68e6a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35e49e51-0b", "ovs_interfaceid": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:10:00 np0005596060 nova_compute[247421]: 2026-01-26 18:10:00.333 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Releasing lock "refresh_cache-1bd1db7a-82d9-4a81-9b92-a7e83f037a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:10:00 np0005596060 nova_compute[247421]: 2026-01-26 18:10:00.334 247428 DEBUG nova.compute.manager [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Instance network_info: |[{"id": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "address": "fa:16:3e:65:aa:5b", "network": {"id": "eb84b7b2-62ef-4f88-96f7-b0cf584d02d7", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1236286424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad07a7cadd1f4901881fdc108d68e6a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35e49e51-0b", "ovs_interfaceid": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 26 13:10:00 np0005596060 nova_compute[247421]: 2026-01-26 18:10:00.335 247428 DEBUG oslo_concurrency.lockutils [req-bf3c2339-3497-4a75-a55f-2e0e20a31884 req-d57f774f-aced-4f81-af91-4ab457de0c1c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-1bd1db7a-82d9-4a81-9b92-a7e83f037a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:10:00 np0005596060 nova_compute[247421]: 2026-01-26 18:10:00.335 247428 DEBUG nova.network.neutron [req-bf3c2339-3497-4a75-a55f-2e0e20a31884 req-d57f774f-aced-4f81-af91-4ab457de0c1c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Refreshing network info cache for port 35e49e51-0be6-4711-8885-8e7b05fcbd88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:10:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:00.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:00 np0005596060 nova_compute[247421]: 2026-01-26 18:10:00.645 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:01.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 58 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 359 KiB/s wr, 61 op/s
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.066 247428 DEBUG nova.objects.instance [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lazy-loading 'migration_context' on Instance uuid 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.085 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.085 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Ensure instance console log exists: /var/lib/nova/instances/1bd1db7a-82d9-4a81-9b92-a7e83f037a99/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.086 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.087 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.087 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.091 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Start _get_guest_xml network_info=[{"id": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "address": "fa:16:3e:65:aa:5b", "network": {"id": "eb84b7b2-62ef-4f88-96f7-b0cf584d02d7", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1236286424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad07a7cadd1f4901881fdc108d68e6a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35e49e51-0b", "ovs_interfaceid": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.097 247428 WARNING nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.103 247428 DEBUG nova.virt.libvirt.host [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.105 247428 DEBUG nova.virt.libvirt.host [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.109 247428 DEBUG nova.virt.libvirt.host [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.110 247428 DEBUG nova.virt.libvirt.host [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.111 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.112 247428 DEBUG nova.virt.hardware [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.113 247428 DEBUG nova.virt.hardware [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.113 247428 DEBUG nova.virt.hardware [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.114 247428 DEBUG nova.virt.hardware [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.114 247428 DEBUG nova.virt.hardware [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.115 247428 DEBUG nova.virt.hardware [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.115 247428 DEBUG nova.virt.hardware [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.116 247428 DEBUG nova.virt.hardware [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.116 247428 DEBUG nova.virt.hardware [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.117 247428 DEBUG nova.virt.hardware [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.117 247428 DEBUG nova.virt.hardware [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.122 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:10:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:10:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:02.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.503 247428 DEBUG nova.network.neutron [req-bf3c2339-3497-4a75-a55f-2e0e20a31884 req-d57f774f-aced-4f81-af91-4ab457de0c1c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Updated VIF entry in instance network info cache for port 35e49e51-0be6-4711-8885-8e7b05fcbd88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.504 247428 DEBUG nova.network.neutron [req-bf3c2339-3497-4a75-a55f-2e0e20a31884 req-d57f774f-aced-4f81-af91-4ab457de0c1c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Updating instance_info_cache with network_info: [{"id": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "address": "fa:16:3e:65:aa:5b", "network": {"id": "eb84b7b2-62ef-4f88-96f7-b0cf584d02d7", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1236286424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad07a7cadd1f4901881fdc108d68e6a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35e49e51-0b", "ovs_interfaceid": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.520 247428 DEBUG oslo_concurrency.lockutils [req-bf3c2339-3497-4a75-a55f-2e0e20a31884 req-d57f774f-aced-4f81-af91-4ab457de0c1c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-1bd1db7a-82d9-4a81-9b92-a7e83f037a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:10:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:10:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646194285' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.605 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:10:02 np0005596060 ceph-mon[74267]: overall HEALTH_OK
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.636 247428 DEBUG nova.storage.rbd_utils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] rbd image 1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:10:02 np0005596060 nova_compute[247421]: 2026-01-26 18:10:02.640 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:10:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:10:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3292659379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.167 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.169 247428 DEBUG nova.virt.libvirt.vif [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:09:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-2124185016',display_name='tempest-VolumesAdminNegativeTest-server-2124185016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-2124185016',id=8,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClLUtfR1uu2JLUDRIYDocw+9Php6VwyabQJFyw2OtGOxku7MMkPS6LmEPxQqFehHHH6Buivw8cDrVSKa2LN1KXPd5vuFSk9DDTWl1VbRGeOBSt5mEZy9zm49Isaulay8A==',key_name='tempest-keypair-467686804',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad07a7cadd1f4901881fdc108d68e6a6',ramdisk_id='',reservation_id='r-ip2529z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-2093520678',owner_user_name='tempest-VolumesAdminNegativeTest-2093520678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:09:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f068f3a0c9ff42b7b9b2f9c46340f94a',uuid=1bd1db7a-82d9-4a81-9b92-a7e83f037a99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "address": "fa:16:3e:65:aa:5b", "network": {"id": "eb84b7b2-62ef-4f88-96f7-b0cf584d02d7", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1236286424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad07a7cadd1f4901881fdc108d68e6a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35e49e51-0b", "ovs_interfaceid": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.169 247428 DEBUG nova.network.os_vif_util [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Converting VIF {"id": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "address": "fa:16:3e:65:aa:5b", "network": {"id": "eb84b7b2-62ef-4f88-96f7-b0cf584d02d7", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1236286424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad07a7cadd1f4901881fdc108d68e6a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35e49e51-0b", "ovs_interfaceid": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.170 247428 DEBUG nova.network.os_vif_util [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:aa:5b,bridge_name='br-int',has_traffic_filtering=True,id=35e49e51-0be6-4711-8885-8e7b05fcbd88,network=Network(eb84b7b2-62ef-4f88-96f7-b0cf584d02d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35e49e51-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.172 247428 DEBUG nova.objects.instance [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.255 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  <uuid>1bd1db7a-82d9-4a81-9b92-a7e83f037a99</uuid>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  <name>instance-00000008</name>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <nova:name>tempest-VolumesAdminNegativeTest-server-2124185016</nova:name>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:10:02</nova:creationTime>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <nova:user uuid="f068f3a0c9ff42b7b9b2f9c46340f94a">tempest-VolumesAdminNegativeTest-2093520678-project-member</nova:user>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <nova:project uuid="ad07a7cadd1f4901881fdc108d68e6a6">tempest-VolumesAdminNegativeTest-2093520678</nova:project>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <nova:port uuid="35e49e51-0be6-4711-8885-8e7b05fcbd88">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <entry name="serial">1bd1db7a-82d9-4a81-9b92-a7e83f037a99</entry>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <entry name="uuid">1bd1db7a-82d9-4a81-9b92-a7e83f037a99</entry>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk.config">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:65:aa:5b"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <target dev="tap35e49e51-0b"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/1bd1db7a-82d9-4a81-9b92-a7e83f037a99/console.log" append="off"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:10:03 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:10:03 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:10:03 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:10:03 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.257 247428 DEBUG nova.compute.manager [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Preparing to wait for external event network-vif-plugged-35e49e51-0be6-4711-8885-8e7b05fcbd88 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.257 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquiring lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.258 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.258 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.259 247428 DEBUG nova.virt.libvirt.vif [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:09:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-2124185016',display_name='tempest-VolumesAdminNegativeTest-server-2124185016',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-2124185016',id=8,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClLUtfR1uu2JLUDRIYDocw+9Php6VwyabQJFyw2OtGOxku7MMkPS6LmEPxQqFehHHH6Buivw8cDrVSKa2LN1KXPd5vuFSk9DDTWl1VbRGeOBSt5mEZy9zm49Isaulay8A==',key_name='tempest-keypair-467686804',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ad07a7cadd1f4901881fdc108d68e6a6',ramdisk_id='',reservation_id='r-ip2529z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesAdminNegativeTest-2093520678',owner_user_name='tempest-VolumesAdminNegativeTest-2093520678-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:09:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f068f3a0c9ff42b7b9b2f9c46340f94a',uuid=1bd1db7a-82d9-4a81-9b92-a7e83f037a99,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "address": "fa:16:3e:65:aa:5b", "network": {"id": "eb84b7b2-62ef-4f88-96f7-b0cf584d02d7", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1236286424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad07a7cadd1f4901881fdc108d68e6a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35e49e51-0b", "ovs_interfaceid": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.259 247428 DEBUG nova.network.os_vif_util [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Converting VIF {"id": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "address": "fa:16:3e:65:aa:5b", "network": {"id": "eb84b7b2-62ef-4f88-96f7-b0cf584d02d7", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1236286424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad07a7cadd1f4901881fdc108d68e6a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35e49e51-0b", "ovs_interfaceid": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.260 247428 DEBUG nova.network.os_vif_util [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:aa:5b,bridge_name='br-int',has_traffic_filtering=True,id=35e49e51-0be6-4711-8885-8e7b05fcbd88,network=Network(eb84b7b2-62ef-4f88-96f7-b0cf584d02d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35e49e51-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.260 247428 DEBUG os_vif [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:aa:5b,bridge_name='br-int',has_traffic_filtering=True,id=35e49e51-0be6-4711-8885-8e7b05fcbd88,network=Network(eb84b7b2-62ef-4f88-96f7-b0cf584d02d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35e49e51-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.261 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.261 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.262 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.264 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.264 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35e49e51-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.265 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35e49e51-0b, col_values=(('external_ids', {'iface-id': '35e49e51-0be6-4711-8885-8e7b05fcbd88', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:aa:5b', 'vm-uuid': '1bd1db7a-82d9-4a81-9b92-a7e83f037a99'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.266 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:03 np0005596060 NetworkManager[48900]: <info>  [1769451003.2674] manager: (tap35e49e51-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.268 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.272 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.272 247428 INFO os_vif [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:aa:5b,bridge_name='br-int',has_traffic_filtering=True,id=35e49e51-0be6-4711-8885-8e7b05fcbd88,network=Network(eb84b7b2-62ef-4f88-96f7-b0cf584d02d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35e49e51-0b')#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.521 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.522 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.522 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] No VIF found with MAC fa:16:3e:65:aa:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.523 247428 INFO nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Using config drive#033[00m
Jan 26 13:10:03 np0005596060 nova_compute[247421]: 2026-01-26 18:10:03.619 247428 DEBUG nova.storage.rbd_utils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] rbd image 1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00024427694025683976 of space, bias 1.0, pg target 0.07328308207705193 quantized to 32 (current 32)
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:10:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:10:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:03.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 88 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Jan 26 13:10:04 np0005596060 nova_compute[247421]: 2026-01-26 18:10:04.032 247428 INFO nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Creating config drive at /var/lib/nova/instances/1bd1db7a-82d9-4a81-9b92-a7e83f037a99/disk.config#033[00m
Jan 26 13:10:04 np0005596060 nova_compute[247421]: 2026-01-26 18:10:04.039 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1bd1db7a-82d9-4a81-9b92-a7e83f037a99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1z2_8mq1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:10:04 np0005596060 nova_compute[247421]: 2026-01-26 18:10:04.170 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1bd1db7a-82d9-4a81-9b92-a7e83f037a99/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1z2_8mq1" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:10:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:10:04 np0005596060 nova_compute[247421]: 2026-01-26 18:10:04.268 247428 DEBUG nova.storage.rbd_utils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] rbd image 1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:10:04 np0005596060 nova_compute[247421]: 2026-01-26 18:10:04.272 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1bd1db7a-82d9-4a81-9b92-a7e83f037a99/disk.config 1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:10:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:04.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:04 np0005596060 nova_compute[247421]: 2026-01-26 18:10:04.567 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769450989.5654328, e40120ae-eb4e-4f0b-9d8f-f0210de78c4f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:10:04 np0005596060 nova_compute[247421]: 2026-01-26 18:10:04.568 247428 INFO nova.compute.manager [-] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:10:04 np0005596060 nova_compute[247421]: 2026-01-26 18:10:04.590 247428 DEBUG nova.compute.manager [None req-b23fe4cf-11af-4d9d-97b7-cb4b64e4be00 - - - - - -] [instance: e40120ae-eb4e-4f0b-9d8f-f0210de78c4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:10:05 np0005596060 nova_compute[247421]: 2026-01-26 18:10:05.460 247428 DEBUG oslo_concurrency.processutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1bd1db7a-82d9-4a81-9b92-a7e83f037a99/disk.config 1bd1db7a-82d9-4a81-9b92-a7e83f037a99_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:10:05 np0005596060 nova_compute[247421]: 2026-01-26 18:10:05.461 247428 INFO nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Deleting local config drive /var/lib/nova/instances/1bd1db7a-82d9-4a81-9b92-a7e83f037a99/disk.config because it was imported into RBD.#033[00m
Jan 26 13:10:05 np0005596060 kernel: tap35e49e51-0b: entered promiscuous mode
Jan 26 13:10:05 np0005596060 NetworkManager[48900]: <info>  [1769451005.5145] manager: (tap35e49e51-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/45)
Jan 26 13:10:05 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:05Z|00080|binding|INFO|Claiming lport 35e49e51-0be6-4711-8885-8e7b05fcbd88 for this chassis.
Jan 26 13:10:05 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:05Z|00081|binding|INFO|35e49e51-0be6-4711-8885-8e7b05fcbd88: Claiming fa:16:3e:65:aa:5b 10.100.0.3
Jan 26 13:10:05 np0005596060 nova_compute[247421]: 2026-01-26 18:10:05.516 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.541 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:aa:5b 10.100.0.3'], port_security=['fa:16:3e:65:aa:5b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1bd1db7a-82d9-4a81-9b92-a7e83f037a99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad07a7cadd1f4901881fdc108d68e6a6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '93271727-8424-4c89-815a-21c72a8dd57e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e48988c5-f10f-4194-b7bb-b82637902bf0, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=35e49e51-0be6-4711-8885-8e7b05fcbd88) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.542 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 35e49e51-0be6-4711-8885-8e7b05fcbd88 in datapath eb84b7b2-62ef-4f88-96f7-b0cf584d02d7 bound to our chassis#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.544 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eb84b7b2-62ef-4f88-96f7-b0cf584d02d7#033[00m
Jan 26 13:10:05 np0005596060 systemd-udevd[259395]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:10:05 np0005596060 NetworkManager[48900]: <info>  [1769451005.5592] device (tap35e49e51-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:10:05 np0005596060 NetworkManager[48900]: <info>  [1769451005.5598] device (tap35e49e51-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.560 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[4da96742-5827-44a1-8643-1eff3e204c31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.560 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeb84b7b2-61 in ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.563 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeb84b7b2-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.564 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[131d98da-faa6-4743-8693-63b07c62d56b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.565 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[39655b2d-addd-44d2-8264-422b95a52da6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.581 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[f33b004d-ffca-4b7a-96fb-6fc82a08c2cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 nova_compute[247421]: 2026-01-26 18:10:05.582 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:05 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:05Z|00082|binding|INFO|Setting lport 35e49e51-0be6-4711-8885-8e7b05fcbd88 ovn-installed in OVS
Jan 26 13:10:05 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:05Z|00083|binding|INFO|Setting lport 35e49e51-0be6-4711-8885-8e7b05fcbd88 up in Southbound
Jan 26 13:10:05 np0005596060 nova_compute[247421]: 2026-01-26 18:10:05.590 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:05 np0005596060 systemd-machined[213879]: New machine qemu-6-instance-00000008.
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.606 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5248836a-bf86-4f6f-825b-d2c35875351b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 systemd[1]: Started Virtual Machine qemu-6-instance-00000008.
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.636 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[ce686d68-f069-4b7f-abb3-101010176701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 systemd-udevd[259398]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:10:05 np0005596060 NetworkManager[48900]: <info>  [1769451005.6460] manager: (tapeb84b7b2-60): new Veth device (/org/freedesktop/NetworkManager/Devices/46)
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.643 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a44e1f41-c23d-4f57-9973-9e444a08bc55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.674 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[300bd9b9-8d72-42ab-a732-afaae6a0a6a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.677 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[26849466-5f0e-4c15-a5db-2ab5ba41f581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 nova_compute[247421]: 2026-01-26 18:10:05.685 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:05 np0005596060 NetworkManager[48900]: <info>  [1769451005.7054] device (tapeb84b7b2-60): carrier: link connected
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.710 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[99e24d79-027a-4d8c-8118-2f9329063a95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.734 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[032a076b-8079-4748-b69c-225ddfd3aa2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb84b7b2-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:ce:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477028, 'reachable_time': 34458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259430, 'error': None, 'target': 'ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.751 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a81f9141-3513-4565-a78a-0054f9c0f295]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:ce18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 477028, 'tstamp': 477028}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259432, 'error': None, 'target': 'ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.777 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e705bd41-f14d-4d2e-b3d8-2dfad4b199f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeb84b7b2-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:ce:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477028, 'reachable_time': 34458, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259433, 'error': None, 'target': 'ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.817 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[80029adb-6a88-41bd-b2e4-02679253987f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:05.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.900 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e93638be-4483-4967-8133-82ed491b9a57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.901 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb84b7b2-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.902 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.902 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb84b7b2-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:10:05 np0005596060 kernel: tapeb84b7b2-60: entered promiscuous mode
Jan 26 13:10:05 np0005596060 NetworkManager[48900]: <info>  [1769451005.9059] manager: (tapeb84b7b2-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Jan 26 13:10:05 np0005596060 nova_compute[247421]: 2026-01-26 18:10:05.905 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.912 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeb84b7b2-60, col_values=(('external_ids', {'iface-id': '1cce36f8-18d8-4234-91c7-5c746c895e14'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:10:05 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:05Z|00084|binding|INFO|Releasing lport 1cce36f8-18d8-4234-91c7-5c746c895e14 from this chassis (sb_readonly=0)
Jan 26 13:10:05 np0005596060 nova_compute[247421]: 2026-01-26 18:10:05.914 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.928 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eb84b7b2-62ef-4f88-96f7-b0cf584d02d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eb84b7b2-62ef-4f88-96f7-b0cf584d02d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:10:05 np0005596060 nova_compute[247421]: 2026-01-26 18:10:05.929 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.929 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[81a966f9-d145-47ee-bb78-ce1fafac4923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.930 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/eb84b7b2-62ef-4f88-96f7-b0cf584d02d7.pid.haproxy
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID eb84b7b2-62ef-4f88-96f7-b0cf584d02d7
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:10:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:05.930 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7', 'env', 'PROCESS_TAG=haproxy-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eb84b7b2-62ef-4f88-96f7-b0cf584d02d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:10:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 88 MiB data, 276 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 26 13:10:06 np0005596060 podman[259465]: 2026-01-26 18:10:06.252639971 +0000 UTC m=+0.024110987 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:10:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:06.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:06 np0005596060 nova_compute[247421]: 2026-01-26 18:10:06.640 247428 DEBUG nova.compute.manager [req-6a52925c-109a-4759-9c55-d0208add39a4 req-99ff7119-cef6-4acb-8a2c-f768c8ce457c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Received event network-vif-plugged-35e49e51-0be6-4711-8885-8e7b05fcbd88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:10:06 np0005596060 nova_compute[247421]: 2026-01-26 18:10:06.641 247428 DEBUG oslo_concurrency.lockutils [req-6a52925c-109a-4759-9c55-d0208add39a4 req-99ff7119-cef6-4acb-8a2c-f768c8ce457c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:10:06 np0005596060 nova_compute[247421]: 2026-01-26 18:10:06.642 247428 DEBUG oslo_concurrency.lockutils [req-6a52925c-109a-4759-9c55-d0208add39a4 req-99ff7119-cef6-4acb-8a2c-f768c8ce457c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:10:06 np0005596060 nova_compute[247421]: 2026-01-26 18:10:06.642 247428 DEBUG oslo_concurrency.lockutils [req-6a52925c-109a-4759-9c55-d0208add39a4 req-99ff7119-cef6-4acb-8a2c-f768c8ce457c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:10:06 np0005596060 nova_compute[247421]: 2026-01-26 18:10:06.642 247428 DEBUG nova.compute.manager [req-6a52925c-109a-4759-9c55-d0208add39a4 req-99ff7119-cef6-4acb-8a2c-f768c8ce457c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Processing event network-vif-plugged-35e49e51-0be6-4711-8885-8e7b05fcbd88 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 26 13:10:06 np0005596060 podman[259465]: 2026-01-26 18:10:06.845946281 +0000 UTC m=+0.617417307 container create 8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.160 247428 DEBUG nova.compute.manager [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.162 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769451007.1611128, 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.163 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] VM Started (Lifecycle Event)#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.168 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.172 247428 INFO nova.virt.libvirt.driver [-] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Instance spawned successfully.#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.173 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 26 13:10:07 np0005596060 systemd[1]: Started libpod-conmon-8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee.scope.
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.178 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.183 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.193 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.194 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.195 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.196 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.196 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.197 247428 DEBUG nova.virt.libvirt.driver [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.202 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.203 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769451007.1612225, 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.203 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] VM Paused (Lifecycle Event)#033[00m
Jan 26 13:10:07 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:10:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94fabae5be672e17dde853b5f836b6ab66b1d26793329e37ab0ddcd53c0619cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.226 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.231 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769451007.1674652, 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.231 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.251 247428 INFO nova.compute.manager [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Took 10.76 seconds to spawn the instance on the hypervisor.#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.251 247428 DEBUG nova.compute.manager [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.252 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.259 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.289 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.310 247428 INFO nova.compute.manager [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Took 11.72 seconds to build instance.#033[00m
Jan 26 13:10:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:07.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:07 np0005596060 nova_compute[247421]: 2026-01-26 18:10:07.916 247428 DEBUG oslo_concurrency.lockutils [None req-cf7f5bdc-5101-4b38-85ae-3352ae4d1cc9 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:10:07 np0005596060 podman[259465]: 2026-01-26 18:10:07.994991793 +0000 UTC m=+1.766462819 container init 8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 13:10:08 np0005596060 podman[259465]: 2026-01-26 18:10:08.003510304 +0000 UTC m=+1.774981300 container start 8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 13:10:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 73 op/s
Jan 26 13:10:08 np0005596060 neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7[259535]: [NOTICE]   (259601) : New worker (259605) forked
Jan 26 13:10:08 np0005596060 neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7[259535]: [NOTICE]   (259601) : Loading success.
Jan 26 13:10:08 np0005596060 nova_compute[247421]: 2026-01-26 18:10:08.268 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:08 np0005596060 podman[259515]: 2026-01-26 18:10:08.305393838 +0000 UTC m=+1.427766815 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 13:10:08 np0005596060 podman[259536]: 2026-01-26 18:10:08.346255887 +0000 UTC m=+1.144902262 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:10:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:08.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:08 np0005596060 nova_compute[247421]: 2026-01-26 18:10:08.771 247428 DEBUG nova.compute.manager [req-e7095d89-6609-4059-83ee-70f86e612f22 req-94f4c5b5-3898-4597-a5f3-efef2434e170 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Received event network-vif-plugged-35e49e51-0be6-4711-8885-8e7b05fcbd88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:10:08 np0005596060 nova_compute[247421]: 2026-01-26 18:10:08.772 247428 DEBUG oslo_concurrency.lockutils [req-e7095d89-6609-4059-83ee-70f86e612f22 req-94f4c5b5-3898-4597-a5f3-efef2434e170 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:10:08 np0005596060 nova_compute[247421]: 2026-01-26 18:10:08.772 247428 DEBUG oslo_concurrency.lockutils [req-e7095d89-6609-4059-83ee-70f86e612f22 req-94f4c5b5-3898-4597-a5f3-efef2434e170 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:10:08 np0005596060 nova_compute[247421]: 2026-01-26 18:10:08.773 247428 DEBUG oslo_concurrency.lockutils [req-e7095d89-6609-4059-83ee-70f86e612f22 req-94f4c5b5-3898-4597-a5f3-efef2434e170 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:10:08 np0005596060 nova_compute[247421]: 2026-01-26 18:10:08.773 247428 DEBUG nova.compute.manager [req-e7095d89-6609-4059-83ee-70f86e612f22 req-94f4c5b5-3898-4597-a5f3-efef2434e170 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] No waiting events found dispatching network-vif-plugged-35e49e51-0be6-4711-8885-8e7b05fcbd88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:10:08 np0005596060 nova_compute[247421]: 2026-01-26 18:10:08.773 247428 WARNING nova.compute.manager [req-e7095d89-6609-4059-83ee-70f86e612f22 req-94f4c5b5-3898-4597-a5f3-efef2434e170 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Received unexpected event network-vif-plugged-35e49e51-0be6-4711-8885-8e7b05fcbd88 for instance with vm_state active and task_state None.#033[00m
Jan 26 13:10:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:10:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:09.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 134 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 60 op/s
Jan 26 13:10:10 np0005596060 NetworkManager[48900]: <info>  [1769451010.2848] manager: (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 26 13:10:10 np0005596060 NetworkManager[48900]: <info>  [1769451010.2858] manager: (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 26 13:10:10 np0005596060 nova_compute[247421]: 2026-01-26 18:10:10.283 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:10.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:10 np0005596060 nova_compute[247421]: 2026-01-26 18:10:10.525 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:10 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:10Z|00085|binding|INFO|Releasing lport 1cce36f8-18d8-4234-91c7-5c746c895e14 from this chassis (sb_readonly=0)
Jan 26 13:10:10 np0005596060 nova_compute[247421]: 2026-01-26 18:10:10.547 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:10 np0005596060 nova_compute[247421]: 2026-01-26 18:10:10.586 247428 DEBUG nova.compute.manager [req-3a6b0019-3a18-4779-a9eb-5e6fd37fe39d req-29378ca8-2f95-4028-924c-11352a171511 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Received event network-changed-35e49e51-0be6-4711-8885-8e7b05fcbd88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:10:10 np0005596060 nova_compute[247421]: 2026-01-26 18:10:10.586 247428 DEBUG nova.compute.manager [req-3a6b0019-3a18-4779-a9eb-5e6fd37fe39d req-29378ca8-2f95-4028-924c-11352a171511 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Refreshing instance network info cache due to event network-changed-35e49e51-0be6-4711-8885-8e7b05fcbd88. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:10:10 np0005596060 nova_compute[247421]: 2026-01-26 18:10:10.586 247428 DEBUG oslo_concurrency.lockutils [req-3a6b0019-3a18-4779-a9eb-5e6fd37fe39d req-29378ca8-2f95-4028-924c-11352a171511 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-1bd1db7a-82d9-4a81-9b92-a7e83f037a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:10:10 np0005596060 nova_compute[247421]: 2026-01-26 18:10:10.587 247428 DEBUG oslo_concurrency.lockutils [req-3a6b0019-3a18-4779-a9eb-5e6fd37fe39d req-29378ca8-2f95-4028-924c-11352a171511 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-1bd1db7a-82d9-4a81-9b92-a7e83f037a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:10:10 np0005596060 nova_compute[247421]: 2026-01-26 18:10:10.587 247428 DEBUG nova.network.neutron [req-3a6b0019-3a18-4779-a9eb-5e6fd37fe39d req-29378ca8-2f95-4028-924c-11352a171511 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Refreshing network info cache for port 35e49e51-0be6-4711-8885-8e7b05fcbd88 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:10:10 np0005596060 nova_compute[247421]: 2026-01-26 18:10:10.686 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:11.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 155 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 91 op/s
Jan 26 13:10:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:12.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:13 np0005596060 nova_compute[247421]: 2026-01-26 18:10:13.271 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:13.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 181 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 5.0 MiB/s wr, 155 op/s
Jan 26 13:10:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:10:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:10:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:10:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:10:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:10:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:10:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:10:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:14.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:14.742 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:10:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:14.743 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:10:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:14.743 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:10:15 np0005596060 nova_compute[247421]: 2026-01-26 18:10:15.738 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:15.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 181 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 142 op/s
Jan 26 13:10:16 np0005596060 nova_compute[247421]: 2026-01-26 18:10:16.432 247428 DEBUG nova.network.neutron [req-3a6b0019-3a18-4779-a9eb-5e6fd37fe39d req-29378ca8-2f95-4028-924c-11352a171511 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Updated VIF entry in instance network info cache for port 35e49e51-0be6-4711-8885-8e7b05fcbd88. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:10:16 np0005596060 nova_compute[247421]: 2026-01-26 18:10:16.432 247428 DEBUG nova.network.neutron [req-3a6b0019-3a18-4779-a9eb-5e6fd37fe39d req-29378ca8-2f95-4028-924c-11352a171511 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Updating instance_info_cache with network_info: [{"id": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "address": "fa:16:3e:65:aa:5b", "network": {"id": "eb84b7b2-62ef-4f88-96f7-b0cf584d02d7", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1236286424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad07a7cadd1f4901881fdc108d68e6a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35e49e51-0b", "ovs_interfaceid": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:10:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:16.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:16 np0005596060 nova_compute[247421]: 2026-01-26 18:10:16.537 247428 DEBUG oslo_concurrency.lockutils [req-3a6b0019-3a18-4779-a9eb-5e6fd37fe39d req-29378ca8-2f95-4028-924c-11352a171511 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-1bd1db7a-82d9-4a81-9b92-a7e83f037a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:10:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:10:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:10:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:10:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:17.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:10:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:10:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:10:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:10:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:10:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:10:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 181 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.6 MiB/s wr, 210 op/s
Jan 26 13:10:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:10:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8365213b-a414-4a4c-ba9d-71294d3a38da does not exist
Jan 26 13:10:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d2dc2b37-50d7-4e86-be86-52483d8165ec does not exist
Jan 26 13:10:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 6b186ba1-e2f9-44ca-9700-b84cc775a37b does not exist
Jan 26 13:10:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:10:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:10:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:10:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:10:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:10:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:10:18 np0005596060 nova_compute[247421]: 2026-01-26 18:10:18.273 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:18.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:18 np0005596060 podman[260031]: 2026-01-26 18:10:18.876503185 +0000 UTC m=+0.080529020 container create b910891101ec6a1962c39c796d92406156498976ff8e98f7d513de750e4c33d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_booth, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:10:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:10:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:10:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:10:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:10:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:10:18 np0005596060 podman[260031]: 2026-01-26 18:10:18.818694197 +0000 UTC m=+0.022720022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:10:19 np0005596060 systemd[1]: Started libpod-conmon-b910891101ec6a1962c39c796d92406156498976ff8e98f7d513de750e4c33d0.scope.
Jan 26 13:10:19 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:10:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 26 13:10:19 np0005596060 podman[260031]: 2026-01-26 18:10:19.186293204 +0000 UTC m=+0.390319019 container init b910891101ec6a1962c39c796d92406156498976ff8e98f7d513de750e4c33d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 26 13:10:19 np0005596060 podman[260031]: 2026-01-26 18:10:19.194068556 +0000 UTC m=+0.398094351 container start b910891101ec6a1962c39c796d92406156498976ff8e98f7d513de750e4c33d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_booth, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:10:19 np0005596060 keen_booth[260047]: 167 167
Jan 26 13:10:19 np0005596060 systemd[1]: libpod-b910891101ec6a1962c39c796d92406156498976ff8e98f7d513de750e4c33d0.scope: Deactivated successfully.
Jan 26 13:10:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 26 13:10:19 np0005596060 podman[260031]: 2026-01-26 18:10:19.320732194 +0000 UTC m=+0.524758009 container attach b910891101ec6a1962c39c796d92406156498976ff8e98f7d513de750e4c33d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_booth, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 13:10:19 np0005596060 podman[260031]: 2026-01-26 18:10:19.322156379 +0000 UTC m=+0.526182224 container died b910891101ec6a1962c39c796d92406156498976ff8e98f7d513de750e4c33d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Jan 26 13:10:19 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 26 13:10:19 np0005596060 systemd[1]: var-lib-containers-storage-overlay-553ea640be2715d4b269293ae24d58e2f3df58fc966086738bfc81dda6b463e0-merged.mount: Deactivated successfully.
Jan 26 13:10:19 np0005596060 podman[260031]: 2026-01-26 18:10:19.827448966 +0000 UTC m=+1.031474761 container remove b910891101ec6a1962c39c796d92406156498976ff8e98f7d513de750e4c33d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 13:10:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:19.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:19 np0005596060 systemd[1]: libpod-conmon-b910891101ec6a1962c39c796d92406156498976ff8e98f7d513de750e4c33d0.scope: Deactivated successfully.
Jan 26 13:10:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 181 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 203 op/s
Jan 26 13:10:20 np0005596060 podman[260073]: 2026-01-26 18:10:19.981820408 +0000 UTC m=+0.025652245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:10:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:10:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:20.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:10:20 np0005596060 podman[260073]: 2026-01-26 18:10:20.584281584 +0000 UTC m=+0.628113411 container create bdb0ec193d00d48587d1adf01552ca822386f56ba5e880a5aa23a9351865c184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:10:20 np0005596060 systemd[1]: Started libpod-conmon-bdb0ec193d00d48587d1adf01552ca822386f56ba5e880a5aa23a9351865c184.scope.
Jan 26 13:10:20 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:10:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd36d4a1e5d8566b4eeff019f6c930c63c70b006b9cc68b60e10c176816495fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd36d4a1e5d8566b4eeff019f6c930c63c70b006b9cc68b60e10c176816495fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd36d4a1e5d8566b4eeff019f6c930c63c70b006b9cc68b60e10c176816495fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd36d4a1e5d8566b4eeff019f6c930c63c70b006b9cc68b60e10c176816495fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd36d4a1e5d8566b4eeff019f6c930c63c70b006b9cc68b60e10c176816495fb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:20 np0005596060 podman[260073]: 2026-01-26 18:10:20.745535456 +0000 UTC m=+0.789367353 container init bdb0ec193d00d48587d1adf01552ca822386f56ba5e880a5aa23a9351865c184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:10:20 np0005596060 nova_compute[247421]: 2026-01-26 18:10:20.771 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:20 np0005596060 podman[260073]: 2026-01-26 18:10:20.778864539 +0000 UTC m=+0.822696356 container start bdb0ec193d00d48587d1adf01552ca822386f56ba5e880a5aa23a9351865c184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 26 13:10:20 np0005596060 podman[260073]: 2026-01-26 18:10:20.950051116 +0000 UTC m=+0.993882943 container attach bdb0ec193d00d48587d1adf01552ca822386f56ba5e880a5aa23a9351865c184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 13:10:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 26 13:10:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 26 13:10:21 np0005596060 adoring_black[260088]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:10:21 np0005596060 adoring_black[260088]: --> relative data size: 1.0
Jan 26 13:10:21 np0005596060 adoring_black[260088]: --> All data devices are unavailable
Jan 26 13:10:21 np0005596060 systemd[1]: libpod-bdb0ec193d00d48587d1adf01552ca822386f56ba5e880a5aa23a9351865c184.scope: Deactivated successfully.
Jan 26 13:10:21 np0005596060 podman[260073]: 2026-01-26 18:10:21.66098392 +0000 UTC m=+1.704815757 container died bdb0ec193d00d48587d1adf01552ca822386f56ba5e880a5aa23a9351865c184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_black, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 13:10:21 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 26 13:10:21 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:21Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:aa:5b 10.100.0.3
Jan 26 13:10:21 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:21Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:aa:5b 10.100.0.3
Jan 26 13:10:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:21.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 207 MiB data, 338 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 176 op/s
Jan 26 13:10:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-cd36d4a1e5d8566b4eeff019f6c930c63c70b006b9cc68b60e10c176816495fb-merged.mount: Deactivated successfully.
Jan 26 13:10:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:10:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:22.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:10:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 26 13:10:22 np0005596060 podman[260073]: 2026-01-26 18:10:22.743422308 +0000 UTC m=+2.787254125 container remove bdb0ec193d00d48587d1adf01552ca822386f56ba5e880a5aa23a9351865c184 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_black, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:10:22 np0005596060 systemd[1]: libpod-conmon-bdb0ec193d00d48587d1adf01552ca822386f56ba5e880a5aa23a9351865c184.scope: Deactivated successfully.
Jan 26 13:10:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 26 13:10:22 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 26 13:10:23 np0005596060 nova_compute[247421]: 2026-01-26 18:10:23.301 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:23 np0005596060 podman[260258]: 2026-01-26 18:10:23.570780278 +0000 UTC m=+0.033353995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:10:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:23.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 253 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 7.6 MiB/s wr, 213 op/s
Jan 26 13:10:24 np0005596060 podman[260258]: 2026-01-26 18:10:24.152456641 +0000 UTC m=+0.615030368 container create 502dfabffa5cd0879c14cac8fcce82ec83e03c52eb962b371bdfdd5e7cfc9bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jang, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 13:10:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:10:24 np0005596060 systemd[1]: Started libpod-conmon-502dfabffa5cd0879c14cac8fcce82ec83e03c52eb962b371bdfdd5e7cfc9bdf.scope.
Jan 26 13:10:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:10:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:24.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:24 np0005596060 podman[260258]: 2026-01-26 18:10:24.746094359 +0000 UTC m=+1.208668146 container init 502dfabffa5cd0879c14cac8fcce82ec83e03c52eb962b371bdfdd5e7cfc9bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jang, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:10:24 np0005596060 podman[260258]: 2026-01-26 18:10:24.757158153 +0000 UTC m=+1.219731890 container start 502dfabffa5cd0879c14cac8fcce82ec83e03c52eb962b371bdfdd5e7cfc9bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jang, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:10:24 np0005596060 interesting_jang[260273]: 167 167
Jan 26 13:10:24 np0005596060 systemd[1]: libpod-502dfabffa5cd0879c14cac8fcce82ec83e03c52eb962b371bdfdd5e7cfc9bdf.scope: Deactivated successfully.
Jan 26 13:10:24 np0005596060 podman[260258]: 2026-01-26 18:10:24.8684224 +0000 UTC m=+1.330996197 container attach 502dfabffa5cd0879c14cac8fcce82ec83e03c52eb962b371bdfdd5e7cfc9bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jang, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 13:10:24 np0005596060 podman[260258]: 2026-01-26 18:10:24.869021645 +0000 UTC m=+1.331595372 container died 502dfabffa5cd0879c14cac8fcce82ec83e03c52eb962b371bdfdd5e7cfc9bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 13:10:25 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2a3f834d416e193446593c2d3c1026ab31add7dffddf19351984ed9b65b70e44-merged.mount: Deactivated successfully.
Jan 26 13:10:25 np0005596060 podman[260258]: 2026-01-26 18:10:25.432286152 +0000 UTC m=+1.894859869 container remove 502dfabffa5cd0879c14cac8fcce82ec83e03c52eb962b371bdfdd5e7cfc9bdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:10:25 np0005596060 systemd[1]: libpod-conmon-502dfabffa5cd0879c14cac8fcce82ec83e03c52eb962b371bdfdd5e7cfc9bdf.scope: Deactivated successfully.
Jan 26 13:10:25 np0005596060 podman[260299]: 2026-01-26 18:10:25.651638329 +0000 UTC m=+0.084482857 container create 0b79f866c734cfbf70745aea42811582d082c1659b4cc3a360ae40cb2ab2a3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_austin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 13:10:25 np0005596060 podman[260299]: 2026-01-26 18:10:25.590159861 +0000 UTC m=+0.023004379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:10:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:10:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2939451568' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:10:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:10:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2939451568' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:10:25 np0005596060 systemd[1]: Started libpod-conmon-0b79f866c734cfbf70745aea42811582d082c1659b4cc3a360ae40cb2ab2a3ae.scope.
Jan 26 13:10:25 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:10:25 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac108a270d657e60525af558b7f399c65b76f8237350d242a67c106aaadbb9fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:25 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac108a270d657e60525af558b7f399c65b76f8237350d242a67c106aaadbb9fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:25 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac108a270d657e60525af558b7f399c65b76f8237350d242a67c106aaadbb9fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:25 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac108a270d657e60525af558b7f399c65b76f8237350d242a67c106aaadbb9fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:25 np0005596060 nova_compute[247421]: 2026-01-26 18:10:25.775 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:25 np0005596060 podman[260299]: 2026-01-26 18:10:25.790156519 +0000 UTC m=+0.223000997 container init 0b79f866c734cfbf70745aea42811582d082c1659b4cc3a360ae40cb2ab2a3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 26 13:10:25 np0005596060 podman[260299]: 2026-01-26 18:10:25.799597852 +0000 UTC m=+0.232442350 container start 0b79f866c734cfbf70745aea42811582d082c1659b4cc3a360ae40cb2ab2a3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_austin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 26 13:10:25 np0005596060 podman[260299]: 2026-01-26 18:10:25.817551416 +0000 UTC m=+0.250395894 container attach 0b79f866c734cfbf70745aea42811582d082c1659b4cc3a360ae40cb2ab2a3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Jan 26 13:10:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:25.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 253 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.9 MiB/s wr, 191 op/s
Jan 26 13:10:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:26.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:26 np0005596060 musing_austin[260316]: {
Jan 26 13:10:26 np0005596060 musing_austin[260316]:    "1": [
Jan 26 13:10:26 np0005596060 musing_austin[260316]:        {
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            "devices": [
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "/dev/loop3"
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            ],
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            "lv_name": "ceph_lv0",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            "lv_size": "7511998464",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            "name": "ceph_lv0",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            "tags": {
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "ceph.cluster_name": "ceph",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "ceph.crush_device_class": "",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "ceph.encrypted": "0",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "ceph.osd_id": "1",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "ceph.type": "block",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:                "ceph.vdo": "0"
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            },
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            "type": "block",
Jan 26 13:10:26 np0005596060 musing_austin[260316]:            "vg_name": "ceph_vg0"
Jan 26 13:10:26 np0005596060 musing_austin[260316]:        }
Jan 26 13:10:26 np0005596060 musing_austin[260316]:    ]
Jan 26 13:10:26 np0005596060 musing_austin[260316]: }
Jan 26 13:10:26 np0005596060 systemd[1]: libpod-0b79f866c734cfbf70745aea42811582d082c1659b4cc3a360ae40cb2ab2a3ae.scope: Deactivated successfully.
Jan 26 13:10:26 np0005596060 podman[260299]: 2026-01-26 18:10:26.585206601 +0000 UTC m=+1.018051119 container died 0b79f866c734cfbf70745aea42811582d082c1659b4cc3a360ae40cb2ab2a3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_austin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 13:10:26 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ac108a270d657e60525af558b7f399c65b76f8237350d242a67c106aaadbb9fe-merged.mount: Deactivated successfully.
Jan 26 13:10:26 np0005596060 podman[260299]: 2026-01-26 18:10:26.643647224 +0000 UTC m=+1.076491702 container remove 0b79f866c734cfbf70745aea42811582d082c1659b4cc3a360ae40cb2ab2a3ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 26 13:10:26 np0005596060 systemd[1]: libpod-conmon-0b79f866c734cfbf70745aea42811582d082c1659b4cc3a360ae40cb2ab2a3ae.scope: Deactivated successfully.
Jan 26 13:10:27 np0005596060 podman[260478]: 2026-01-26 18:10:27.293834219 +0000 UTC m=+0.036292687 container create e208e09ef830798fe6201269d1642ce4d9fdfc6c28ee53d88e760b24d118e8d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:10:27 np0005596060 systemd[1]: Started libpod-conmon-e208e09ef830798fe6201269d1642ce4d9fdfc6c28ee53d88e760b24d118e8d8.scope.
Jan 26 13:10:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:10:27 np0005596060 podman[260478]: 2026-01-26 18:10:27.365980571 +0000 UTC m=+0.108439049 container init e208e09ef830798fe6201269d1642ce4d9fdfc6c28ee53d88e760b24d118e8d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 13:10:27 np0005596060 podman[260478]: 2026-01-26 18:10:27.372805299 +0000 UTC m=+0.115263767 container start e208e09ef830798fe6201269d1642ce4d9fdfc6c28ee53d88e760b24d118e8d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 26 13:10:27 np0005596060 podman[260478]: 2026-01-26 18:10:27.278955672 +0000 UTC m=+0.021414150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:10:27 np0005596060 podman[260478]: 2026-01-26 18:10:27.376135671 +0000 UTC m=+0.118594139 container attach e208e09ef830798fe6201269d1642ce4d9fdfc6c28ee53d88e760b24d118e8d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kapitsa, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 13:10:27 np0005596060 gallant_kapitsa[260494]: 167 167
Jan 26 13:10:27 np0005596060 systemd[1]: libpod-e208e09ef830798fe6201269d1642ce4d9fdfc6c28ee53d88e760b24d118e8d8.scope: Deactivated successfully.
Jan 26 13:10:27 np0005596060 conmon[260494]: conmon e208e09ef830798fe620 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e208e09ef830798fe6201269d1642ce4d9fdfc6c28ee53d88e760b24d118e8d8.scope/container/memory.events
Jan 26 13:10:27 np0005596060 podman[260478]: 2026-01-26 18:10:27.379559856 +0000 UTC m=+0.122018314 container died e208e09ef830798fe6201269d1642ce4d9fdfc6c28ee53d88e760b24d118e8d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 26 13:10:27 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:27Z|00086|binding|INFO|Releasing lport 1cce36f8-18d8-4234-91c7-5c746c895e14 from this chassis (sb_readonly=0)
Jan 26 13:10:27 np0005596060 systemd[1]: var-lib-containers-storage-overlay-3ef25a128be5deffbb65d7da1598b2dc8393d82ed5e33e1dff81589f7ee391b8-merged.mount: Deactivated successfully.
Jan 26 13:10:27 np0005596060 podman[260478]: 2026-01-26 18:10:27.417917513 +0000 UTC m=+0.160375981 container remove e208e09ef830798fe6201269d1642ce4d9fdfc6c28ee53d88e760b24d118e8d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kapitsa, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:10:27 np0005596060 systemd[1]: libpod-conmon-e208e09ef830798fe6201269d1642ce4d9fdfc6c28ee53d88e760b24d118e8d8.scope: Deactivated successfully.
Jan 26 13:10:27 np0005596060 nova_compute[247421]: 2026-01-26 18:10:27.464 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:27 np0005596060 podman[260517]: 2026-01-26 18:10:27.622375532 +0000 UTC m=+0.048327935 container create 57bf4f781114a05f171bdb7a8f7012461c49a10bb1e3ca79bf37267cba5db854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 13:10:27 np0005596060 systemd[1]: Started libpod-conmon-57bf4f781114a05f171bdb7a8f7012461c49a10bb1e3ca79bf37267cba5db854.scope.
Jan 26 13:10:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:10:27 np0005596060 podman[260517]: 2026-01-26 18:10:27.602690586 +0000 UTC m=+0.028643019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:10:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d7eefec20a8bec809370c876dc2580c890be46f5ba14d0e227426fed37d85c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d7eefec20a8bec809370c876dc2580c890be46f5ba14d0e227426fed37d85c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d7eefec20a8bec809370c876dc2580c890be46f5ba14d0e227426fed37d85c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d7eefec20a8bec809370c876dc2580c890be46f5ba14d0e227426fed37d85c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:10:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 26 13:10:27 np0005596060 podman[260517]: 2026-01-26 18:10:27.713801439 +0000 UTC m=+0.139753872 container init 57bf4f781114a05f171bdb7a8f7012461c49a10bb1e3ca79bf37267cba5db854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 13:10:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 26 13:10:27 np0005596060 podman[260517]: 2026-01-26 18:10:27.72232559 +0000 UTC m=+0.148277993 container start 57bf4f781114a05f171bdb7a8f7012461c49a10bb1e3ca79bf37267cba5db854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 13:10:27 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 26 13:10:27 np0005596060 podman[260517]: 2026-01-26 18:10:27.727071737 +0000 UTC m=+0.153024160 container attach 57bf4f781114a05f171bdb7a8f7012461c49a10bb1e3ca79bf37267cba5db854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:10:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:27.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 236 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 8.3 MiB/s wr, 311 op/s
Jan 26 13:10:28 np0005596060 nova_compute[247421]: 2026-01-26 18:10:28.305 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:28.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:28 np0005596060 practical_ptolemy[260533]: {
Jan 26 13:10:28 np0005596060 practical_ptolemy[260533]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:10:28 np0005596060 practical_ptolemy[260533]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:10:28 np0005596060 practical_ptolemy[260533]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:10:28 np0005596060 practical_ptolemy[260533]:        "osd_id": 1,
Jan 26 13:10:28 np0005596060 practical_ptolemy[260533]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:10:28 np0005596060 practical_ptolemy[260533]:        "type": "bluestore"
Jan 26 13:10:28 np0005596060 practical_ptolemy[260533]:    }
Jan 26 13:10:28 np0005596060 practical_ptolemy[260533]: }
Jan 26 13:10:28 np0005596060 systemd[1]: libpod-57bf4f781114a05f171bdb7a8f7012461c49a10bb1e3ca79bf37267cba5db854.scope: Deactivated successfully.
Jan 26 13:10:28 np0005596060 podman[260517]: 2026-01-26 18:10:28.710065149 +0000 UTC m=+1.136017592 container died 57bf4f781114a05f171bdb7a8f7012461c49a10bb1e3ca79bf37267cba5db854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:10:28 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a9d7eefec20a8bec809370c876dc2580c890be46f5ba14d0e227426fed37d85c-merged.mount: Deactivated successfully.
Jan 26 13:10:28 np0005596060 podman[260517]: 2026-01-26 18:10:28.774452749 +0000 UTC m=+1.200405152 container remove 57bf4f781114a05f171bdb7a8f7012461c49a10bb1e3ca79bf37267cba5db854 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ptolemy, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 13:10:28 np0005596060 systemd[1]: libpod-conmon-57bf4f781114a05f171bdb7a8f7012461c49a10bb1e3ca79bf37267cba5db854.scope: Deactivated successfully.
Jan 26 13:10:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:10:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:10:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:10:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:10:28 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 1fe5d4f5-8505-44ac-a162-9b66470d5640 does not exist
Jan 26 13:10:28 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 15e5e4e2-45f7-4e6c-8fcc-3a777eb396d3 does not exist
Jan 26 13:10:28 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 0f9757bd-509f-4f1c-aa2e-85ed6af10ab8 does not exist
Jan 26 13:10:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:10:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 26 13:10:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 26 13:10:29 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 26 13:10:29 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:10:29 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:10:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:29.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 236 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 830 KiB/s rd, 3.7 MiB/s wr, 182 op/s
Jan 26 13:10:30 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:30Z|00087|binding|INFO|Releasing lport 1cce36f8-18d8-4234-91c7-5c746c895e14 from this chassis (sb_readonly=0)
Jan 26 13:10:30 np0005596060 nova_compute[247421]: 2026-01-26 18:10:30.450 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:30.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:30 np0005596060 nova_compute[247421]: 2026-01-26 18:10:30.777 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 26 13:10:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 26 13:10:31 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 26 13:10:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:31.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 236 MiB data, 357 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.8 MiB/s wr, 265 op/s
Jan 26 13:10:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:32.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:33 np0005596060 nova_compute[247421]: 2026-01-26 18:10:33.307 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:33.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 249 MiB data, 372 MiB used, 21 GiB / 21 GiB avail; 7.5 MiB/s rd, 5.2 MiB/s wr, 146 op/s
Jan 26 13:10:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 26 13:10:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 26 13:10:34 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 26 13:10:35 np0005596060 nova_compute[247421]: 2026-01-26 18:10:35.780 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:35.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 26 13:10:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 249 MiB data, 372 MiB used, 21 GiB / 21 GiB avail; 7.2 MiB/s rd, 4.9 MiB/s wr, 140 op/s
Jan 26 13:10:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 26 13:10:36 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 26 13:10:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:36.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:37.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 279 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 6.9 MiB/s rd, 6.9 MiB/s wr, 187 op/s
Jan 26 13:10:38 np0005596060 nova_compute[247421]: 2026-01-26 18:10:38.310 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:38.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:38 np0005596060 podman[260670]: 2026-01-26 18:10:38.821359702 +0000 UTC m=+0.071067399 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:10:38 np0005596060 podman[260671]: 2026-01-26 18:10:38.913552628 +0000 UTC m=+0.153802519 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 13:10:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:39.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:10:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 26 13:10:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 279 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.8 MiB/s wr, 123 op/s
Jan 26 13:10:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:40.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 26 13:10:40 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 26 13:10:40 np0005596060 nova_compute[247421]: 2026-01-26 18:10:40.781 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 26 13:10:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 26 13:10:41 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 26 13:10:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:41.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 279 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 2.4 MiB/s wr, 65 op/s
Jan 26 13:10:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:42.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:43 np0005596060 nova_compute[247421]: 2026-01-26 18:10:43.312 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:43.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 258 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 68 KiB/s rd, 1.9 MiB/s wr, 90 op/s
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:10:44
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['images', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'vms', 'volumes', '.mgr', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:10:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:10:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:44.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:44 np0005596060 nova_compute[247421]: 2026-01-26 18:10:44.570 247428 DEBUG oslo_concurrency.lockutils [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquiring lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:10:44 np0005596060 nova_compute[247421]: 2026-01-26 18:10:44.570 247428 DEBUG oslo_concurrency.lockutils [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:10:44 np0005596060 nova_compute[247421]: 2026-01-26 18:10:44.587 247428 DEBUG nova.objects.instance [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lazy-loading 'flavor' on Instance uuid 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:10:44 np0005596060 nova_compute[247421]: 2026-01-26 18:10:44.636 247428 DEBUG oslo_concurrency.lockutils [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.141 247428 DEBUG oslo_concurrency.lockutils [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquiring lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.142 247428 DEBUG oslo_concurrency.lockutils [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.142 247428 INFO nova.compute.manager [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Attaching volume 32d211da-5555-44ce-8588-cdda1b258327 to /dev/vdb#033[00m
Jan 26 13:10:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.415 247428 DEBUG os_brick.utils [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.417 257571 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.431 257571 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.431 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[474ae47d-8856-43a1-925b-10465fe7ee56]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.433 257571 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.443 257571 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.443 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c4bdd3-5a89-4f82-bde2-3b17ba713740]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14cb718ec160', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.446 257571 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.455 257571 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.455 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[a27ef0a1-89bb-4ac3-8792-10764df413de]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.457 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[3b7a22a6-0837-4b28-9a73-e29f0a148e6c]: (4, 'd27b7a41-30de-40e4-9f10-b4e4f5902919') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.458 247428 DEBUG oslo_concurrency.processutils [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.486 247428 DEBUG oslo_concurrency.processutils [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.489 247428 DEBUG os_brick.initiator.connectors.lightos [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.489 247428 DEBUG os_brick.initiator.connectors.lightos [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.489 247428 DEBUG os_brick.initiator.connectors.lightos [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.490 247428 DEBUG os_brick.utils [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14cb718ec160', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'd27b7a41-30de-40e4-9f10-b4e4f5902919', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.490 247428 DEBUG nova.virt.block_device [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Updating existing volume attachment record: 3f2d8b88-1b60-4b68-853b-5222e41fe678 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 26 13:10:45 np0005596060 nova_compute[247421]: 2026-01-26 18:10:45.784 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:45.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 258 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 3.0 KiB/s wr, 44 op/s
Jan 26 13:10:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:46.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:46 np0005596060 nova_compute[247421]: 2026-01-26 18:10:46.639 247428 DEBUG nova.objects.instance [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lazy-loading 'flavor' on Instance uuid 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:10:46 np0005596060 nova_compute[247421]: 2026-01-26 18:10:46.663 247428 DEBUG nova.virt.libvirt.driver [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Attempting to attach volume 32d211da-5555-44ce-8588-cdda1b258327 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 26 13:10:46 np0005596060 nova_compute[247421]: 2026-01-26 18:10:46.670 247428 DEBUG nova.virt.libvirt.guest [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] attach device xml: <disk type="network" device="disk">
Jan 26 13:10:46 np0005596060 nova_compute[247421]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 26 13:10:46 np0005596060 nova_compute[247421]:  <source protocol="rbd" name="volumes/volume-32d211da-5555-44ce-8588-cdda1b258327">
Jan 26 13:10:46 np0005596060 nova_compute[247421]:    <host name="192.168.122.100" port="6789"/>
Jan 26 13:10:46 np0005596060 nova_compute[247421]:    <host name="192.168.122.102" port="6789"/>
Jan 26 13:10:46 np0005596060 nova_compute[247421]:    <host name="192.168.122.101" port="6789"/>
Jan 26 13:10:46 np0005596060 nova_compute[247421]:  </source>
Jan 26 13:10:46 np0005596060 nova_compute[247421]:  <auth username="openstack">
Jan 26 13:10:46 np0005596060 nova_compute[247421]:    <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:10:46 np0005596060 nova_compute[247421]:  </auth>
Jan 26 13:10:46 np0005596060 nova_compute[247421]:  <target dev="vdb" bus="virtio"/>
Jan 26 13:10:46 np0005596060 nova_compute[247421]:  <serial>32d211da-5555-44ce-8588-cdda1b258327</serial>
Jan 26 13:10:46 np0005596060 nova_compute[247421]: </disk>
Jan 26 13:10:46 np0005596060 nova_compute[247421]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 26 13:10:47 np0005596060 nova_compute[247421]: 2026-01-26 18:10:47.576 247428 DEBUG nova.virt.libvirt.driver [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:10:47 np0005596060 nova_compute[247421]: 2026-01-26 18:10:47.577 247428 DEBUG nova.virt.libvirt.driver [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:10:47 np0005596060 nova_compute[247421]: 2026-01-26 18:10:47.577 247428 DEBUG nova.virt.libvirt.driver [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:10:47 np0005596060 nova_compute[247421]: 2026-01-26 18:10:47.577 247428 DEBUG nova.virt.libvirt.driver [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] No VIF found with MAC fa:16:3e:65:aa:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:10:47 np0005596060 nova_compute[247421]: 2026-01-26 18:10:47.786 247428 DEBUG oslo_concurrency.lockutils [None req-9604d534-cde0-405c-95e2-0ca95b42e0e3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:10:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:10:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:47.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:10:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 67 KiB/s rd, 5.5 KiB/s wr, 95 op/s
Jan 26 13:10:48 np0005596060 nova_compute[247421]: 2026-01-26 18:10:48.361 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:48.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:49 np0005596060 nova_compute[247421]: 2026-01-26 18:10:49.707 247428 DEBUG oslo_concurrency.lockutils [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquiring lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:10:49 np0005596060 nova_compute[247421]: 2026-01-26 18:10:49.709 247428 DEBUG oslo_concurrency.lockutils [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:10:49 np0005596060 nova_compute[247421]: 2026-01-26 18:10:49.749 247428 INFO nova.compute.manager [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Detaching volume 32d211da-5555-44ce-8588-cdda1b258327#033[00m
Jan 26 13:10:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:49.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:49 np0005596060 nova_compute[247421]: 2026-01-26 18:10:49.980 247428 INFO nova.virt.block_device [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Attempting to driver detach volume 32d211da-5555-44ce-8588-cdda1b258327 from mountpoint /dev/vdb#033[00m
Jan 26 13:10:49 np0005596060 nova_compute[247421]: 2026-01-26 18:10:49.993 247428 DEBUG nova.virt.libvirt.driver [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Attempting to detach device vdb from instance 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 26 13:10:49 np0005596060 nova_compute[247421]: 2026-01-26 18:10:49.994 247428 DEBUG nova.virt.libvirt.guest [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 26 13:10:49 np0005596060 nova_compute[247421]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 26 13:10:49 np0005596060 nova_compute[247421]:  <source protocol="rbd" name="volumes/volume-32d211da-5555-44ce-8588-cdda1b258327">
Jan 26 13:10:49 np0005596060 nova_compute[247421]:    <host name="192.168.122.100" port="6789"/>
Jan 26 13:10:49 np0005596060 nova_compute[247421]:    <host name="192.168.122.102" port="6789"/>
Jan 26 13:10:49 np0005596060 nova_compute[247421]:    <host name="192.168.122.101" port="6789"/>
Jan 26 13:10:49 np0005596060 nova_compute[247421]:  </source>
Jan 26 13:10:49 np0005596060 nova_compute[247421]:  <target dev="vdb" bus="virtio"/>
Jan 26 13:10:49 np0005596060 nova_compute[247421]:  <serial>32d211da-5555-44ce-8588-cdda1b258327</serial>
Jan 26 13:10:49 np0005596060 nova_compute[247421]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 26 13:10:49 np0005596060 nova_compute[247421]: </disk>
Jan 26 13:10:49 np0005596060 nova_compute[247421]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 26 13:10:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 4.7 KiB/s wr, 81 op/s
Jan 26 13:10:50 np0005596060 nova_compute[247421]: 2026-01-26 18:10:50.089 247428 INFO nova.virt.libvirt.driver [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Successfully detached device vdb from instance 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 from the persistent domain config.#033[00m
Jan 26 13:10:50 np0005596060 nova_compute[247421]: 2026-01-26 18:10:50.090 247428 DEBUG nova.virt.libvirt.driver [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 26 13:10:50 np0005596060 nova_compute[247421]: 2026-01-26 18:10:50.091 247428 DEBUG nova.virt.libvirt.guest [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] detach device xml: <disk type="network" device="disk">
Jan 26 13:10:50 np0005596060 nova_compute[247421]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 26 13:10:50 np0005596060 nova_compute[247421]:  <source protocol="rbd" name="volumes/volume-32d211da-5555-44ce-8588-cdda1b258327">
Jan 26 13:10:50 np0005596060 nova_compute[247421]:    <host name="192.168.122.100" port="6789"/>
Jan 26 13:10:50 np0005596060 nova_compute[247421]:    <host name="192.168.122.102" port="6789"/>
Jan 26 13:10:50 np0005596060 nova_compute[247421]:    <host name="192.168.122.101" port="6789"/>
Jan 26 13:10:50 np0005596060 nova_compute[247421]:  </source>
Jan 26 13:10:50 np0005596060 nova_compute[247421]:  <target dev="vdb" bus="virtio"/>
Jan 26 13:10:50 np0005596060 nova_compute[247421]:  <serial>32d211da-5555-44ce-8588-cdda1b258327</serial>
Jan 26 13:10:50 np0005596060 nova_compute[247421]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 26 13:10:50 np0005596060 nova_compute[247421]: </disk>
Jan 26 13:10:50 np0005596060 nova_compute[247421]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 26 13:10:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:10:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 26 13:10:50 np0005596060 nova_compute[247421]: 2026-01-26 18:10:50.176 247428 DEBUG nova.virt.libvirt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Received event <DeviceRemovedEvent: 1769451050.1762495, 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 26 13:10:50 np0005596060 nova_compute[247421]: 2026-01-26 18:10:50.178 247428 DEBUG nova.virt.libvirt.driver [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 26 13:10:50 np0005596060 nova_compute[247421]: 2026-01-26 18:10:50.181 247428 INFO nova.virt.libvirt.driver [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Successfully detached device vdb from instance 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 from the live domain config.#033[00m
Jan 26 13:10:50 np0005596060 nova_compute[247421]: 2026-01-26 18:10:50.357 247428 DEBUG nova.objects.instance [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lazy-loading 'flavor' on Instance uuid 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:10:50 np0005596060 nova_compute[247421]: 2026-01-26 18:10:50.396 247428 DEBUG oslo_concurrency.lockutils [None req-cd1e240c-8633-48c1-965c-56c96f7edea3 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:10:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 26 13:10:50 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 26 13:10:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:50.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:50 np0005596060 nova_compute[247421]: 2026-01-26 18:10:50.825 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:51.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 4.4 KiB/s wr, 74 op/s
Jan 26 13:10:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:52.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:53 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:53Z|00088|binding|INFO|Releasing lport 1cce36f8-18d8-4234-91c7-5c746c895e14 from this chassis (sb_readonly=0)
Jan 26 13:10:53 np0005596060 nova_compute[247421]: 2026-01-26 18:10:53.072 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:53 np0005596060 nova_compute[247421]: 2026-01-26 18:10:53.362 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:53.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 4.0 KiB/s wr, 41 op/s
Jan 26 13:10:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:54.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:54.652 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:10:54 np0005596060 nova_compute[247421]: 2026-01-26 18:10:54.652 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:54.654 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:10:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:10:55 np0005596060 nova_compute[247421]: 2026-01-26 18:10:55.827 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:10:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:55.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:10:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 4.0 KiB/s wr, 41 op/s
Jan 26 13:10:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:56.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:56 np0005596060 nova_compute[247421]: 2026-01-26 18:10:56.769 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:10:56 np0005596060 nova_compute[247421]: 2026-01-26 18:10:56.770 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:10:56 np0005596060 nova_compute[247421]: 2026-01-26 18:10:56.770 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:10:56 np0005596060 nova_compute[247421]: 2026-01-26 18:10:56.770 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:10:57 np0005596060 nova_compute[247421]: 2026-01-26 18:10:57.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:10:57 np0005596060 nova_compute[247421]: 2026-01-26 18:10:57.695 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:57.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 459 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.364 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:10:58.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.918 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "refresh_cache-1bd1db7a-82d9-4a81-9b92-a7e83f037a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.919 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquired lock "refresh_cache-1bd1db7a-82d9-4a81-9b92-a7e83f037a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.919 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.919 247428 DEBUG nova.objects.instance [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.989 247428 DEBUG oslo_concurrency.lockutils [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquiring lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.990 247428 DEBUG oslo_concurrency.lockutils [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.990 247428 DEBUG oslo_concurrency.lockutils [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquiring lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.990 247428 DEBUG oslo_concurrency.lockutils [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.990 247428 DEBUG oslo_concurrency.lockutils [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.991 247428 INFO nova.compute.manager [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Terminating instance#033[00m
Jan 26 13:10:58 np0005596060 nova_compute[247421]: 2026-01-26 18:10:58.992 247428 DEBUG nova.compute.manager [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:10:59 np0005596060 kernel: tap35e49e51-0b (unregistering): left promiscuous mode
Jan 26 13:10:59 np0005596060 NetworkManager[48900]: <info>  [1769451059.5373] device (tap35e49e51-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.545 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:59 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:59Z|00089|binding|INFO|Releasing lport 35e49e51-0be6-4711-8885-8e7b05fcbd88 from this chassis (sb_readonly=0)
Jan 26 13:10:59 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:59Z|00090|binding|INFO|Setting lport 35e49e51-0be6-4711-8885-8e7b05fcbd88 down in Southbound
Jan 26 13:10:59 np0005596060 ovn_controller[148842]: 2026-01-26T18:10:59Z|00091|binding|INFO|Removing iface tap35e49e51-0b ovn-installed in OVS
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.547 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.554 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:aa:5b 10.100.0.3'], port_security=['fa:16:3e:65:aa:5b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1bd1db7a-82d9-4a81-9b92-a7e83f037a99', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ad07a7cadd1f4901881fdc108d68e6a6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '93271727-8424-4c89-815a-21c72a8dd57e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e48988c5-f10f-4194-b7bb-b82637902bf0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=35e49e51-0be6-4711-8885-8e7b05fcbd88) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.556 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 35e49e51-0be6-4711-8885-8e7b05fcbd88 in datapath eb84b7b2-62ef-4f88-96f7-b0cf584d02d7 unbound from our chassis#033[00m
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.557 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eb84b7b2-62ef-4f88-96f7-b0cf584d02d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.558 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f76330e4-0d3a-4008-827e-db1dc12bda1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.559 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7 namespace which is not needed anymore#033[00m
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.564 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:59 np0005596060 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 26 13:10:59 np0005596060 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000008.scope: Consumed 15.604s CPU time.
Jan 26 13:10:59 np0005596060 systemd-machined[213879]: Machine qemu-6-instance-00000008 terminated.
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.628 247428 INFO nova.virt.libvirt.driver [-] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Instance destroyed successfully.#033[00m
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.629 247428 DEBUG nova.objects.instance [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lazy-loading 'resources' on Instance uuid 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.643 247428 DEBUG nova.virt.libvirt.vif [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:09:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesAdminNegativeTest-server-2124185016',display_name='tempest-VolumesAdminNegativeTest-server-2124185016',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesadminnegativetest-server-2124185016',id=8,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBClLUtfR1uu2JLUDRIYDocw+9Php6VwyabQJFyw2OtGOxku7MMkPS6LmEPxQqFehHHH6Buivw8cDrVSKa2LN1KXPd5vuFSk9DDTWl1VbRGeOBSt5mEZy9zm49Isaulay8A==',key_name='tempest-keypair-467686804',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:10:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ad07a7cadd1f4901881fdc108d68e6a6',ramdisk_id='',reservation_id='r-ip2529z0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesAdminNegativeTest-2093520678',owner_user_name='tempest-VolumesAdminNegativeTest-2093520678-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:10:07Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f068f3a0c9ff42b7b9b2f9c46340f94a',uuid=1bd1db7a-82d9-4a81-9b92-a7e83f037a99,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "address": "fa:16:3e:65:aa:5b", "network": {"id": "eb84b7b2-62ef-4f88-96f7-b0cf584d02d7", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1236286424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad07a7cadd1f4901881fdc108d68e6a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35e49e51-0b", "ovs_interfaceid": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.643 247428 DEBUG nova.network.os_vif_util [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Converting VIF {"id": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "address": "fa:16:3e:65:aa:5b", "network": {"id": "eb84b7b2-62ef-4f88-96f7-b0cf584d02d7", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1236286424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad07a7cadd1f4901881fdc108d68e6a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35e49e51-0b", "ovs_interfaceid": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.644 247428 DEBUG nova.network.os_vif_util [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:aa:5b,bridge_name='br-int',has_traffic_filtering=True,id=35e49e51-0be6-4711-8885-8e7b05fcbd88,network=Network(eb84b7b2-62ef-4f88-96f7-b0cf584d02d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35e49e51-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.644 247428 DEBUG os_vif [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:aa:5b,bridge_name='br-int',has_traffic_filtering=True,id=35e49e51-0be6-4711-8885-8e7b05fcbd88,network=Network(eb84b7b2-62ef-4f88-96f7-b0cf584d02d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35e49e51-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.645 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.645 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35e49e51-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.647 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.648 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.650 247428 INFO os_vif [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:aa:5b,bridge_name='br-int',has_traffic_filtering=True,id=35e49e51-0be6-4711-8885-8e7b05fcbd88,network=Network(eb84b7b2-62ef-4f88-96f7-b0cf584d02d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35e49e51-0b')#033[00m
Jan 26 13:10:59 np0005596060 neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7[259535]: [NOTICE]   (259601) : haproxy version is 2.8.14-c23fe91
Jan 26 13:10:59 np0005596060 neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7[259535]: [NOTICE]   (259601) : path to executable is /usr/sbin/haproxy
Jan 26 13:10:59 np0005596060 neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7[259535]: [WARNING]  (259601) : Exiting Master process...
Jan 26 13:10:59 np0005596060 neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7[259535]: [ALERT]    (259601) : Current worker (259605) exited with code 143 (Terminated)
Jan 26 13:10:59 np0005596060 neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7[259535]: [WARNING]  (259601) : All workers exited. Exiting... (0)
Jan 26 13:10:59 np0005596060 systemd[1]: libpod-8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee.scope: Deactivated successfully.
Jan 26 13:10:59 np0005596060 podman[260841]: 2026-01-26 18:10:59.704412647 +0000 UTC m=+0.045284444 container died 8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:10:59 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee-userdata-shm.mount: Deactivated successfully.
Jan 26 13:10:59 np0005596060 systemd[1]: var-lib-containers-storage-overlay-94fabae5be672e17dde853b5f836b6ab66b1d26793329e37ab0ddcd53c0619cc-merged.mount: Deactivated successfully.
Jan 26 13:10:59 np0005596060 podman[260841]: 2026-01-26 18:10:59.748300645 +0000 UTC m=+0.089172442 container cleanup 8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:10:59 np0005596060 systemd[1]: libpod-conmon-8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee.scope: Deactivated successfully.
Jan 26 13:10:59 np0005596060 podman[260888]: 2026-01-26 18:10:59.851429074 +0000 UTC m=+0.076273139 container remove 8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.858 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[27a47428-ab84-42cf-8e19-a83d0c5b6d91]: (4, ('Mon Jan 26 06:10:59 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7 (8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee)\n8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee\nMon Jan 26 06:10:59 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7 (8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee)\n8717561ba5b394a31701e81f92f2c380eeddb9fdd8014ba47af97fdaafab93ee\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.861 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[3f035c26-ab7c-4d21-91ba-df37e968582e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.862 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb84b7b2-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.865 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:59 np0005596060 kernel: tapeb84b7b2-60: left promiscuous mode
Jan 26 13:10:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:10:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:10:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:10:59.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:10:59 np0005596060 nova_compute[247421]: 2026-01-26 18:10:59.879 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.882 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a148042f-8fe3-498d-bf2a-c05f02ec0d6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.902 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[73f37926-adff-4996-acc3-5faf57803ebf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.904 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa9dd46-1a1c-43da-b773-4233a96c1546]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.921 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5f2c4bcc-75b8-4143-bcfe-83e64058c856]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 477021, 'reachable_time': 28981, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260904, 'error': None, 'target': 'ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:10:59 np0005596060 systemd[1]: run-netns-ovnmeta\x2deb84b7b2\x2d62ef\x2d4f88\x2d96f7\x2db0cf584d02d7.mount: Deactivated successfully.
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.926 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eb84b7b2-62ef-4f88-96f7-b0cf584d02d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:10:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:10:59.927 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[e39074f5-8f19-4941-ae56-beb42c9d5ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 459 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.258 247428 INFO nova.virt.libvirt.driver [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Deleting instance files /var/lib/nova/instances/1bd1db7a-82d9-4a81-9b92-a7e83f037a99_del#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.260 247428 INFO nova.virt.libvirt.driver [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Deletion of /var/lib/nova/instances/1bd1db7a-82d9-4a81-9b92-a7e83f037a99_del complete#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.315 247428 INFO nova.compute.manager [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Took 1.32 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.316 247428 DEBUG oslo.service.loopingcall [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.317 247428 DEBUG nova.compute.manager [-] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.317 247428 DEBUG nova.network.neutron [-] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:11:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.457 247428 DEBUG nova.compute.manager [req-51279da1-fe6a-4178-9a27-d1f2c2780d69 req-6fc18ee2-886e-4b8c-b7c8-2f2455054225 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Received event network-vif-unplugged-35e49e51-0be6-4711-8885-8e7b05fcbd88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.458 247428 DEBUG oslo_concurrency.lockutils [req-51279da1-fe6a-4178-9a27-d1f2c2780d69 req-6fc18ee2-886e-4b8c-b7c8-2f2455054225 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.458 247428 DEBUG oslo_concurrency.lockutils [req-51279da1-fe6a-4178-9a27-d1f2c2780d69 req-6fc18ee2-886e-4b8c-b7c8-2f2455054225 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.459 247428 DEBUG oslo_concurrency.lockutils [req-51279da1-fe6a-4178-9a27-d1f2c2780d69 req-6fc18ee2-886e-4b8c-b7c8-2f2455054225 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.459 247428 DEBUG nova.compute.manager [req-51279da1-fe6a-4178-9a27-d1f2c2780d69 req-6fc18ee2-886e-4b8c-b7c8-2f2455054225 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] No waiting events found dispatching network-vif-unplugged-35e49e51-0be6-4711-8885-8e7b05fcbd88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.459 247428 DEBUG nova.compute.manager [req-51279da1-fe6a-4178-9a27-d1f2c2780d69 req-6fc18ee2-886e-4b8c-b7c8-2f2455054225 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Received event network-vif-unplugged-35e49e51-0be6-4711-8885-8e7b05fcbd88 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:11:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 26 13:11:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:00.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.598 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Updating instance_info_cache with network_info: [{"id": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "address": "fa:16:3e:65:aa:5b", "network": {"id": "eb84b7b2-62ef-4f88-96f7-b0cf584d02d7", "bridge": "br-int", "label": "tempest-VolumesAdminNegativeTest-1236286424-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ad07a7cadd1f4901881fdc108d68e6a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35e49e51-0b", "ovs_interfaceid": "35e49e51-0be6-4711-8885-8e7b05fcbd88", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.615 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.636 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Releasing lock "refresh_cache-1bd1db7a-82d9-4a81-9b92-a7e83f037a99" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.637 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.637 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.641 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.642 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.643 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:00.656 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.677 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.677 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.678 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.678 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.679 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:11:00 np0005596060 nova_compute[247421]: 2026-01-26 18:11:00.829 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:11:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1905683420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:11:01 np0005596060 nova_compute[247421]: 2026-01-26 18:11:01.194 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:11:01 np0005596060 nova_compute[247421]: 2026-01-26 18:11:01.380 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:11:01 np0005596060 nova_compute[247421]: 2026-01-26 18:11:01.382 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4834MB free_disk=20.92178726196289GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:11:01 np0005596060 nova_compute[247421]: 2026-01-26 18:11:01.382 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:01 np0005596060 nova_compute[247421]: 2026-01-26 18:11:01.383 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:01 np0005596060 nova_compute[247421]: 2026-01-26 18:11:01.459 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:11:01 np0005596060 nova_compute[247421]: 2026-01-26 18:11:01.459 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:11:01 np0005596060 nova_compute[247421]: 2026-01-26 18:11:01.460 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:11:01 np0005596060 nova_compute[247421]: 2026-01-26 18:11:01.526 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:11:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:01.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:11:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/734190327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:11:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 141 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 922 KiB/s rd, 1.9 MiB/s wr, 86 op/s
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.049 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.056 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.084 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.120 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.120 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.178 247428 DEBUG nova.network.neutron [-] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.204 247428 INFO nova.compute.manager [-] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Took 1.89 seconds to deallocate network for instance.#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.253 247428 DEBUG oslo_concurrency.lockutils [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.254 247428 DEBUG oslo_concurrency.lockutils [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.331 247428 DEBUG oslo_concurrency.processutils [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.551 247428 DEBUG nova.compute.manager [req-8440096d-82c3-43e9-ae3d-9ee126f18e05 req-2cd7debd-ef6a-4ece-8c55-b8a952c8e920 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Received event network-vif-deleted-35e49e51-0be6-4711-8885-8e7b05fcbd88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:11:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:02.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.658 247428 DEBUG nova.compute.manager [req-f1cec032-f160-4a70-b013-6f138103f37c req-7142812c-3283-44b2-b2ab-c42ac3f86b52 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Received event network-vif-plugged-35e49e51-0be6-4711-8885-8e7b05fcbd88 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.659 247428 DEBUG oslo_concurrency.lockutils [req-f1cec032-f160-4a70-b013-6f138103f37c req-7142812c-3283-44b2-b2ab-c42ac3f86b52 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.659 247428 DEBUG oslo_concurrency.lockutils [req-f1cec032-f160-4a70-b013-6f138103f37c req-7142812c-3283-44b2-b2ab-c42ac3f86b52 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.659 247428 DEBUG oslo_concurrency.lockutils [req-f1cec032-f160-4a70-b013-6f138103f37c req-7142812c-3283-44b2-b2ab-c42ac3f86b52 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.660 247428 DEBUG nova.compute.manager [req-f1cec032-f160-4a70-b013-6f138103f37c req-7142812c-3283-44b2-b2ab-c42ac3f86b52 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] No waiting events found dispatching network-vif-plugged-35e49e51-0be6-4711-8885-8e7b05fcbd88 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.660 247428 WARNING nova.compute.manager [req-f1cec032-f160-4a70-b013-6f138103f37c req-7142812c-3283-44b2-b2ab-c42ac3f86b52 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Received unexpected event network-vif-plugged-35e49e51-0be6-4711-8885-8e7b05fcbd88 for instance with vm_state deleted and task_state None.#033[00m
Jan 26 13:11:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:11:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2844415169' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.779 247428 DEBUG oslo_concurrency.processutils [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.786 247428 DEBUG nova.compute.provider_tree [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.802 247428 DEBUG nova.scheduler.client.report [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.829 247428 DEBUG oslo_concurrency.lockutils [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.864 247428 INFO nova.scheduler.client.report [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Deleted allocations for instance 1bd1db7a-82d9-4a81-9b92-a7e83f037a99#033[00m
Jan 26 13:11:02 np0005596060 nova_compute[247421]: 2026-01-26 18:11:02.930 247428 DEBUG oslo_concurrency.lockutils [None req-a53cecc1-6bac-427c-a18c-e23f8298fa31 f068f3a0c9ff42b7b9b2f9c46340f94a ad07a7cadd1f4901881fdc108d68e6a6 - - default default] Lock "1bd1db7a-82d9-4a81-9b92-a7e83f037a99" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0026490598245858922 of space, bias 1.0, pg target 0.7947179473757676 quantized to 32 (current 32)
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:11:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:11:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:03.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Jan 26 13:11:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:04.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:04 np0005596060 nova_compute[247421]: 2026-01-26 18:11:04.649 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:11:05 np0005596060 nova_compute[247421]: 2026-01-26 18:11:05.831 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:05.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Jan 26 13:11:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:06.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:07.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:07 np0005596060 nova_compute[247421]: 2026-01-26 18:11:07.923 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Jan 26 13:11:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:08.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:09 np0005596060 nova_compute[247421]: 2026-01-26 18:11:09.653 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:09 np0005596060 podman[261027]: 2026-01-26 18:11:09.792950688 +0000 UTC m=+0.055229173 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 13:11:09 np0005596060 podman[261028]: 2026-01-26 18:11:09.832544148 +0000 UTC m=+0.095335246 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, 
container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 26 13:11:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:09.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.4 KiB/s wr, 81 op/s
Jan 26 13:11:10 np0005596060 nova_compute[247421]: 2026-01-26 18:11:10.060 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:10 np0005596060 nova_compute[247421]: 2026-01-26 18:11:10.260 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:11:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:10.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:10 np0005596060 nova_compute[247421]: 2026-01-26 18:11:10.832 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:11:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 10K writes, 42K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2691 syncs, 4.02 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2340 writes, 8242 keys, 2340 commit groups, 1.0 writes per commit group, ingest: 7.33 MB, 0.01 MB/s#012Interval WAL: 2340 writes, 917 syncs, 2.55 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 13:11:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:11.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:12 np0005596060 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 26 13:11:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 89 MiB data, 288 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 270 KiB/s wr, 99 op/s
Jan 26 13:11:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:12.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:13.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 113 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 26 13:11:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:11:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:11:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:11:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:11:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:11:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:11:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:14.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:14 np0005596060 nova_compute[247421]: 2026-01-26 18:11:14.626 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769451059.6254349, 1bd1db7a-82d9-4a81-9b92-a7e83f037a99 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:11:14 np0005596060 nova_compute[247421]: 2026-01-26 18:11:14.626 247428 INFO nova.compute.manager [-] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:11:14 np0005596060 nova_compute[247421]: 2026-01-26 18:11:14.657 247428 DEBUG nova.compute.manager [None req-b4211869-e99f-48c7-afee-cae09953c720 - - - - - -] [instance: 1bd1db7a-82d9-4a81-9b92-a7e83f037a99] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:11:14 np0005596060 nova_compute[247421]: 2026-01-26 18:11:14.658 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:14.743 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:14.743 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:14.744 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:11:15 np0005596060 nova_compute[247421]: 2026-01-26 18:11:15.835 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:15.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 113 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 225 KiB/s rd, 2.1 MiB/s wr, 47 op/s
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.147 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Acquiring lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.148 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.176 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.291 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.291 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.299 247428 DEBUG nova.virt.hardware [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.299 247428 INFO nova.compute.claims [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.378 247428 DEBUG nova.scheduler.client.report [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Refreshing inventories for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.413 247428 DEBUG nova.scheduler.client.report [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Updating ProviderTree inventory for provider c679f5ea-e093-4909-bb04-0342c8551a8f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.414 247428 DEBUG nova.compute.provider_tree [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.440 247428 DEBUG nova.scheduler.client.report [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Refreshing aggregate associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.467 247428 DEBUG nova.scheduler.client.report [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Refreshing trait associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.506 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:11:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:16.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:11:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3543170152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.947 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.956 247428 DEBUG nova.compute.provider_tree [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:11:16 np0005596060 nova_compute[247421]: 2026-01-26 18:11:16.977 247428 DEBUG nova.scheduler.client.report [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.009 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.011 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.089 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.090 247428 DEBUG nova.network.neutron [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.111 247428 INFO nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.135 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.207 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.208 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.209 247428 INFO nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Creating image(s)#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.240 247428 DEBUG nova.storage.rbd_utils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] rbd image 269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.267 247428 DEBUG nova.storage.rbd_utils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] rbd image 269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.292 247428 DEBUG nova.storage.rbd_utils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] rbd image 269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.296 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.358 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.359 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Acquiring lock "0e27310cde9db7031eb6052434134c1283ddf216" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.360 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.360 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.384 247428 DEBUG nova.storage.rbd_utils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] rbd image 269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.388 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.534 247428 DEBUG nova.policy [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'bb9a263bc00f40ca8042731ef5b267b8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b681bb2aa54b41b791e6f56386f44866', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.731 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.820 247428 DEBUG nova.storage.rbd_utils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] resizing rbd image 269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 26 13:11:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:17.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.940 247428 DEBUG nova.objects.instance [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lazy-loading 'migration_context' on Instance uuid 269591ef-171e-4d4b-9fa0-97cd49fa40d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.991 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.991 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Ensure instance console log exists: /var/lib/nova/instances/269591ef-171e-4d4b-9fa0-97cd49fa40d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.992 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.992 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:17 np0005596060 nova_compute[247421]: 2026-01-26 18:11:17.992 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 123 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 294 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Jan 26 13:11:18 np0005596060 nova_compute[247421]: 2026-01-26 18:11:18.350 247428 DEBUG nova.network.neutron [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Successfully created port: fc39dca6-125d-4797-8ccf-b4306dac78b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 26 13:11:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:18.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:19 np0005596060 nova_compute[247421]: 2026-01-26 18:11:19.336 247428 DEBUG nova.network.neutron [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Successfully updated port: fc39dca6-125d-4797-8ccf-b4306dac78b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 26 13:11:19 np0005596060 nova_compute[247421]: 2026-01-26 18:11:19.354 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Acquiring lock "refresh_cache-269591ef-171e-4d4b-9fa0-97cd49fa40d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:11:19 np0005596060 nova_compute[247421]: 2026-01-26 18:11:19.354 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Acquired lock "refresh_cache-269591ef-171e-4d4b-9fa0-97cd49fa40d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:11:19 np0005596060 nova_compute[247421]: 2026-01-26 18:11:19.355 247428 DEBUG nova.network.neutron [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:11:19 np0005596060 nova_compute[247421]: 2026-01-26 18:11:19.470 247428 DEBUG nova.compute.manager [req-3f252f8e-087b-4c11-97da-0d5ac9f16972 req-b22e54ad-a8ca-4602-bf9f-d7ecafc64a5c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Received event network-changed-fc39dca6-125d-4797-8ccf-b4306dac78b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:11:19 np0005596060 nova_compute[247421]: 2026-01-26 18:11:19.471 247428 DEBUG nova.compute.manager [req-3f252f8e-087b-4c11-97da-0d5ac9f16972 req-b22e54ad-a8ca-4602-bf9f-d7ecafc64a5c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Refreshing instance network info cache due to event network-changed-fc39dca6-125d-4797-8ccf-b4306dac78b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:11:19 np0005596060 nova_compute[247421]: 2026-01-26 18:11:19.471 247428 DEBUG oslo_concurrency.lockutils [req-3f252f8e-087b-4c11-97da-0d5ac9f16972 req-b22e54ad-a8ca-4602-bf9f-d7ecafc64a5c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-269591ef-171e-4d4b-9fa0-97cd49fa40d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:11:19 np0005596060 nova_compute[247421]: 2026-01-26 18:11:19.557 247428 DEBUG nova.network.neutron [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 26 13:11:19 np0005596060 nova_compute[247421]: 2026-01-26 18:11:19.662 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:19.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 123 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 294 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Jan 26 13:11:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 26 13:11:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 26 13:11:20 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 26 13:11:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:11:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:20.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:20 np0005596060 nova_compute[247421]: 2026-01-26 18:11:20.836 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:20 np0005596060 nova_compute[247421]: 2026-01-26 18:11:20.854 247428 DEBUG nova.network.neutron [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Updating instance_info_cache with network_info: [{"id": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "address": "fa:16:3e:06:88:3f", "network": {"id": "7b61c336-ec02-4e23-918b-a918c4044fa8", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-671050088-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b681bb2aa54b41b791e6f56386f44866", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc39dca6-12", "ovs_interfaceid": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.120 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Releasing lock "refresh_cache-269591ef-171e-4d4b-9fa0-97cd49fa40d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.120 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Instance network_info: |[{"id": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "address": "fa:16:3e:06:88:3f", "network": {"id": "7b61c336-ec02-4e23-918b-a918c4044fa8", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-671050088-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b681bb2aa54b41b791e6f56386f44866", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc39dca6-12", "ovs_interfaceid": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.121 247428 DEBUG oslo_concurrency.lockutils [req-3f252f8e-087b-4c11-97da-0d5ac9f16972 req-b22e54ad-a8ca-4602-bf9f-d7ecafc64a5c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-269591ef-171e-4d4b-9fa0-97cd49fa40d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.121 247428 DEBUG nova.network.neutron [req-3f252f8e-087b-4c11-97da-0d5ac9f16972 req-b22e54ad-a8ca-4602-bf9f-d7ecafc64a5c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Refreshing network info cache for port fc39dca6-125d-4797-8ccf-b4306dac78b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.124 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Start _get_guest_xml network_info=[{"id": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "address": "fa:16:3e:06:88:3f", "network": {"id": "7b61c336-ec02-4e23-918b-a918c4044fa8", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-671050088-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b681bb2aa54b41b791e6f56386f44866", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc39dca6-12", "ovs_interfaceid": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.129 247428 WARNING nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.133 247428 DEBUG nova.virt.libvirt.host [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.134 247428 DEBUG nova.virt.libvirt.host [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.137 247428 DEBUG nova.virt.libvirt.host [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.138 247428 DEBUG nova.virt.libvirt.host [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.139 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.139 247428 DEBUG nova.virt.hardware [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.140 247428 DEBUG nova.virt.hardware [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.140 247428 DEBUG nova.virt.hardware [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.140 247428 DEBUG nova.virt.hardware [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.140 247428 DEBUG nova.virt.hardware [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.140 247428 DEBUG nova.virt.hardware [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.141 247428 DEBUG nova.virt.hardware [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.141 247428 DEBUG nova.virt.hardware [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.141 247428 DEBUG nova.virt.hardware [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.141 247428 DEBUG nova.virt.hardware [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.141 247428 DEBUG nova.virt.hardware [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.144 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.203 247428 DEBUG oslo_concurrency.lockutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Acquiring lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:11:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979469764' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.593 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.617 247428 DEBUG nova.storage.rbd_utils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] rbd image 269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:11:21 np0005596060 nova_compute[247421]: 2026-01-26 18:11:21.620 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:11:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:21.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 138 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 243 KiB/s rd, 2.9 MiB/s wr, 73 op/s
Jan 26 13:11:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:11:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2204941276' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.080 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.083 247428 DEBUG nova.virt.libvirt.vif [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:11:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2043601698',display_name='tempest-ServersAdminTestJSON-server-2043601698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2043601698',id=11,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b681bb2aa54b41b791e6f56386f44866',ramdisk_id='',reservation_id='r-4co96cbf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1698802578',owner_user_name='tempest-ServersAdminTestJSON-1698802578-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:11:17Z,user_data=None,user_id='bb9a263bc00f40ca8042731ef5b267b8',uuid=269591ef-171e-4d4b-9fa0-97cd49fa40d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "address": "fa:16:3e:06:88:3f", "network": {"id": "7b61c336-ec02-4e23-918b-a918c4044fa8", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-671050088-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b681bb2aa54b41b791e6f56386f44866", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc39dca6-12", "ovs_interfaceid": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.083 247428 DEBUG nova.network.os_vif_util [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Converting VIF {"id": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "address": "fa:16:3e:06:88:3f", "network": {"id": "7b61c336-ec02-4e23-918b-a918c4044fa8", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-671050088-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b681bb2aa54b41b791e6f56386f44866", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc39dca6-12", "ovs_interfaceid": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.084 247428 DEBUG nova.network.os_vif_util [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:88:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc39dca6-125d-4797-8ccf-b4306dac78b1,network=Network(7b61c336-ec02-4e23-918b-a918c4044fa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc39dca6-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.086 247428 DEBUG nova.objects.instance [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lazy-loading 'pci_devices' on Instance uuid 269591ef-171e-4d4b-9fa0-97cd49fa40d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.105 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  <uuid>269591ef-171e-4d4b-9fa0-97cd49fa40d0</uuid>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  <name>instance-0000000b</name>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <nova:name>tempest-ServersAdminTestJSON-server-2043601698</nova:name>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:11:21</nova:creationTime>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <nova:user uuid="bb9a263bc00f40ca8042731ef5b267b8">tempest-ServersAdminTestJSON-1698802578-project-member</nova:user>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <nova:project uuid="b681bb2aa54b41b791e6f56386f44866">tempest-ServersAdminTestJSON-1698802578</nova:project>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <nova:port uuid="fc39dca6-125d-4797-8ccf-b4306dac78b1">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <entry name="serial">269591ef-171e-4d4b-9fa0-97cd49fa40d0</entry>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <entry name="uuid">269591ef-171e-4d4b-9fa0-97cd49fa40d0</entry>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk.config">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:06:88:3f"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <target dev="tapfc39dca6-12"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/269591ef-171e-4d4b-9fa0-97cd49fa40d0/console.log" append="off"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:11:22 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:11:22 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:11:22 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:11:22 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.107 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Preparing to wait for external event network-vif-plugged-fc39dca6-125d-4797-8ccf-b4306dac78b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.107 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Acquiring lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.107 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.107 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.108 247428 DEBUG nova.virt.libvirt.vif [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:11:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2043601698',display_name='tempest-ServersAdminTestJSON-server-2043601698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2043601698',id=11,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b681bb2aa54b41b791e6f56386f44866',ramdisk_id='',reservation_id='r-4co96cbf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1698802578',owner_user_name='tempest-ServersAdminTestJSON-1698802578-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:11:17Z,user_data=None,user_id='bb9a263bc00f40ca8042731ef5b267b8',uuid=269591ef-171e-4d4b-9fa0-97cd49fa40d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "address": "fa:16:3e:06:88:3f", "network": {"id": "7b61c336-ec02-4e23-918b-a918c4044fa8", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-671050088-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b681bb2aa54b41b791e6f56386f44866", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc39dca6-12", "ovs_interfaceid": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.108 247428 DEBUG nova.network.os_vif_util [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Converting VIF {"id": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "address": "fa:16:3e:06:88:3f", "network": {"id": "7b61c336-ec02-4e23-918b-a918c4044fa8", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-671050088-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b681bb2aa54b41b791e6f56386f44866", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc39dca6-12", "ovs_interfaceid": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.109 247428 DEBUG nova.network.os_vif_util [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:88:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc39dca6-125d-4797-8ccf-b4306dac78b1,network=Network(7b61c336-ec02-4e23-918b-a918c4044fa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc39dca6-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.109 247428 DEBUG os_vif [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:88:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc39dca6-125d-4797-8ccf-b4306dac78b1,network=Network(7b61c336-ec02-4e23-918b-a918c4044fa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc39dca6-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.110 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.110 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.110 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.114 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.114 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc39dca6-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.115 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc39dca6-12, col_values=(('external_ids', {'iface-id': 'fc39dca6-125d-4797-8ccf-b4306dac78b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:88:3f', 'vm-uuid': '269591ef-171e-4d4b-9fa0-97cd49fa40d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:11:22 np0005596060 NetworkManager[48900]: <info>  [1769451082.1178] manager: (tapfc39dca6-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.117 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.120 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.125 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:22 np0005596060 nova_compute[247421]: 2026-01-26 18:11:22.126 247428 INFO os_vif [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:88:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc39dca6-125d-4797-8ccf-b4306dac78b1,network=Network(7b61c336-ec02-4e23-918b-a918c4044fa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc39dca6-12')#033[00m
Jan 26 13:11:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:22.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:23 np0005596060 nova_compute[247421]: 2026-01-26 18:11:23.640 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:11:23 np0005596060 nova_compute[247421]: 2026-01-26 18:11:23.641 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:11:23 np0005596060 nova_compute[247421]: 2026-01-26 18:11:23.641 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] No VIF found with MAC fa:16:3e:06:88:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:11:23 np0005596060 nova_compute[247421]: 2026-01-26 18:11:23.641 247428 INFO nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Using config drive#033[00m
Jan 26 13:11:23 np0005596060 nova_compute[247421]: 2026-01-26 18:11:23.662 247428 DEBUG nova.storage.rbd_utils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] rbd image 269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:11:23 np0005596060 nova_compute[247421]: 2026-01-26 18:11:23.707 247428 DEBUG nova.network.neutron [req-3f252f8e-087b-4c11-97da-0d5ac9f16972 req-b22e54ad-a8ca-4602-bf9f-d7ecafc64a5c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Updated VIF entry in instance network info cache for port fc39dca6-125d-4797-8ccf-b4306dac78b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:11:23 np0005596060 nova_compute[247421]: 2026-01-26 18:11:23.708 247428 DEBUG nova.network.neutron [req-3f252f8e-087b-4c11-97da-0d5ac9f16972 req-b22e54ad-a8ca-4602-bf9f-d7ecafc64a5c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Updating instance_info_cache with network_info: [{"id": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "address": "fa:16:3e:06:88:3f", "network": {"id": "7b61c336-ec02-4e23-918b-a918c4044fa8", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-671050088-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b681bb2aa54b41b791e6f56386f44866", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc39dca6-12", "ovs_interfaceid": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:11:23 np0005596060 nova_compute[247421]: 2026-01-26 18:11:23.762 247428 DEBUG oslo_concurrency.lockutils [req-3f252f8e-087b-4c11-97da-0d5ac9f16972 req-b22e54ad-a8ca-4602-bf9f-d7ecafc64a5c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-269591ef-171e-4d4b-9fa0-97cd49fa40d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:11:23 np0005596060 ceph-mgr[74563]: [devicehealth INFO root] Check health
Jan 26 13:11:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:23.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 134 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Jan 26 13:11:24 np0005596060 nova_compute[247421]: 2026-01-26 18:11:24.183 247428 INFO nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Creating config drive at /var/lib/nova/instances/269591ef-171e-4d4b-9fa0-97cd49fa40d0/disk.config#033[00m
Jan 26 13:11:24 np0005596060 nova_compute[247421]: 2026-01-26 18:11:24.189 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/269591ef-171e-4d4b-9fa0-97cd49fa40d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbb8f779q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:11:24 np0005596060 nova_compute[247421]: 2026-01-26 18:11:24.323 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/269591ef-171e-4d4b-9fa0-97cd49fa40d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbb8f779q" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.331810) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451084331941, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2113, "num_deletes": 254, "total_data_size": 3749862, "memory_usage": 3809024, "flush_reason": "Manual Compaction"}
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 26 13:11:24 np0005596060 nova_compute[247421]: 2026-01-26 18:11:24.358 247428 DEBUG nova.storage.rbd_utils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] rbd image 269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:11:24 np0005596060 nova_compute[247421]: 2026-01-26 18:11:24.363 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/269591ef-171e-4d4b-9fa0-97cd49fa40d0/disk.config 269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451084364568, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3672252, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22267, "largest_seqno": 24379, "table_properties": {"data_size": 3662722, "index_size": 6024, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19888, "raw_average_key_size": 20, "raw_value_size": 3643544, "raw_average_value_size": 3756, "num_data_blocks": 267, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769450884, "oldest_key_time": 1769450884, "file_creation_time": 1769451084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 32838 microseconds, and 12394 cpu microseconds.
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.364651) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3672252 bytes OK
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.364688) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.437610) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.437662) EVENT_LOG_v1 {"time_micros": 1769451084437650, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.437697) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3741189, prev total WAL file size 3741189, number of live WAL files 2.
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.438917) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3586KB)], [53(7758KB)]
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451084439065, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11616815, "oldest_snapshot_seqno": -1}
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4992 keys, 9565171 bytes, temperature: kUnknown
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451084524922, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9565171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9530209, "index_size": 21412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12485, "raw_key_size": 126248, "raw_average_key_size": 25, "raw_value_size": 9438324, "raw_average_value_size": 1890, "num_data_blocks": 877, "num_entries": 4992, "num_filter_entries": 4992, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769451084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.525239) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9565171 bytes
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.528026) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 135.2 rd, 111.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 7.6 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(5.8) write-amplify(2.6) OK, records in: 5515, records dropped: 523 output_compression: NoCompression
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.528046) EVENT_LOG_v1 {"time_micros": 1769451084528036, "job": 28, "event": "compaction_finished", "compaction_time_micros": 85924, "compaction_time_cpu_micros": 25988, "output_level": 6, "num_output_files": 1, "total_output_size": 9565171, "num_input_records": 5515, "num_output_records": 4992, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451084528918, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451084531098, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.438732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.531262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.531273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.531275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.531277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:11:24 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:11:24.531279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:11:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:24.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:24 np0005596060 nova_compute[247421]: 2026-01-26 18:11:24.959 247428 DEBUG oslo_concurrency.processutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/269591ef-171e-4d4b-9fa0-97cd49fa40d0/disk.config 269591ef-171e-4d4b-9fa0-97cd49fa40d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:11:24 np0005596060 nova_compute[247421]: 2026-01-26 18:11:24.960 247428 INFO nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Deleting local config drive /var/lib/nova/instances/269591ef-171e-4d4b-9fa0-97cd49fa40d0/disk.config because it was imported into RBD.#033[00m
Jan 26 13:11:25 np0005596060 kernel: tapfc39dca6-12: entered promiscuous mode
Jan 26 13:11:25 np0005596060 NetworkManager[48900]: <info>  [1769451085.0180] manager: (tapfc39dca6-12): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.018 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:25 np0005596060 ovn_controller[148842]: 2026-01-26T18:11:25Z|00092|binding|INFO|Claiming lport fc39dca6-125d-4797-8ccf-b4306dac78b1 for this chassis.
Jan 26 13:11:25 np0005596060 ovn_controller[148842]: 2026-01-26T18:11:25Z|00093|binding|INFO|fc39dca6-125d-4797-8ccf-b4306dac78b1: Claiming fa:16:3e:06:88:3f 10.100.0.3
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.021 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.026 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:25 np0005596060 systemd-udevd[261404]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:11:25 np0005596060 systemd-machined[213879]: New machine qemu-7-instance-0000000b.
Jan 26 13:11:25 np0005596060 NetworkManager[48900]: <info>  [1769451085.0606] device (tapfc39dca6-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:11:25 np0005596060 NetworkManager[48900]: <info>  [1769451085.0616] device (tapfc39dca6-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:11:25 np0005596060 systemd[1]: Started Virtual Machine qemu-7-instance-0000000b.
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.091 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:25 np0005596060 ovn_controller[148842]: 2026-01-26T18:11:25Z|00094|binding|INFO|Setting lport fc39dca6-125d-4797-8ccf-b4306dac78b1 ovn-installed in OVS
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.094 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.547 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769451085.5463827, 269591ef-171e-4d4b-9fa0-97cd49fa40d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.547 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] VM Started (Lifecycle Event)#033[00m
Jan 26 13:11:25 np0005596060 ovn_controller[148842]: 2026-01-26T18:11:25Z|00095|binding|INFO|Setting lport fc39dca6-125d-4797-8ccf-b4306dac78b1 up in Southbound
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.598 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:88:3f 10.100.0.3'], port_security=['fa:16:3e:06:88:3f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '269591ef-171e-4d4b-9fa0-97cd49fa40d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b61c336-ec02-4e23-918b-a918c4044fa8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b681bb2aa54b41b791e6f56386f44866', 'neutron:revision_number': '2', 'neutron:security_group_ids': '16f50677-548c-46d8-8091-74ea747f5d22', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8dbc3aad-d81d-4af3-af0f-f3e61321f3fc, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=fc39dca6-125d-4797-8ccf-b4306dac78b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.599 159331 INFO neutron.agent.ovn.metadata.agent [-] Port fc39dca6-125d-4797-8ccf-b4306dac78b1 in datapath 7b61c336-ec02-4e23-918b-a918c4044fa8 bound to our chassis#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.600 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b61c336-ec02-4e23-918b-a918c4044fa8#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.614 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[846ac1cc-0185-4843-85a7-1c68b414678c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.616 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7b61c336-e1 in ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.618 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7b61c336-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.618 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7e476e05-e2fc-4644-acd6-95bc95793189]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.619 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a3767782-75ff-4609-9730-d7d916515cdd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.633 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[f315cc25-9220-404e-9eea-618701c7f32f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.649 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[eb93d865-e4fb-4473-81b0-1e2898e2b877]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.656 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.660 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769451085.5477207, 269591ef-171e-4d4b-9fa0-97cd49fa40d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.661 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] VM Paused (Lifecycle Event)#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.680 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[7fe960c8-5dc9-42dd-9942-440eb52c99c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.685 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[68898bfd-5da2-4de0-a946-dd3c147a40e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 systemd-udevd[261407]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:11:25 np0005596060 NetworkManager[48900]: <info>  [1769451085.6873] manager: (tap7b61c336-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/52)
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.719 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[bb80ae47-1b70-476c-8ead-b22bc633e183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.722 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[a5316ff9-c123-4491-8610-a0bbff691c2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 NetworkManager[48900]: <info>  [1769451085.7447] device (tap7b61c336-e0): carrier: link connected
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.749 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[64867fdf-a817-434e-8ff2-df6ef08ed2c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.765 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f46a03c9-ec31-42f4-9e56-46a66d439c59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b61c336-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:3d:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485032, 'reachable_time': 19356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261480, 'error': None, 'target': 'ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.777 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0e044b-f733-46c4-9f11-b4227fabb1c7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5a:3d2b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485032, 'tstamp': 485032}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261481, 'error': None, 'target': 'ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.794 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[74ccd335-d89f-44d6-ac1c-427f7be52ec4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b61c336-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5a:3d:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 28], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485032, 'reachable_time': 19356, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261483, 'error': None, 'target': 'ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.827 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[559d8fd8-d171-4077-9a7b-964d7ec0d6b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.838 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.858 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.862 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: deleting, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.888 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.893 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0eeb77d9-6b35-41ea-9b5f-0402c1cdd7bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.894 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b61c336-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.895 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.895 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b61c336-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:11:25 np0005596060 kernel: tap7b61c336-e0: entered promiscuous mode
Jan 26 13:11:25 np0005596060 NetworkManager[48900]: <info>  [1769451085.8975] manager: (tap7b61c336-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.897 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.900 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b61c336-e0, col_values=(('external_ids', {'iface-id': 'd7d00927-ab47-4516-b3f6-fa0a1a4ff527'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.901 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:25 np0005596060 ovn_controller[148842]: 2026-01-26T18:11:25Z|00096|binding|INFO|Releasing lport d7d00927-ab47-4516-b3f6-fa0a1a4ff527 from this chassis (sb_readonly=0)
Jan 26 13:11:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:25.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:25 np0005596060 nova_compute[247421]: 2026-01-26 18:11:25.916 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.918 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7b61c336-ec02-4e23-918b-a918c4044fa8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7b61c336-ec02-4e23-918b-a918c4044fa8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.919 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a31bd142-31eb-4be4-96d6-f6d685ab49f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.920 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-7b61c336-ec02-4e23-918b-a918c4044fa8
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/7b61c336-ec02-4e23-918b-a918c4044fa8.pid.haproxy
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 7b61c336-ec02-4e23-918b-a918c4044fa8
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:11:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:25.920 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8', 'env', 'PROCESS_TAG=haproxy-7b61c336-ec02-4e23-918b-a918c4044fa8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7b61c336-ec02-4e23-918b-a918c4044fa8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:11:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 134 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Jan 26 13:11:26 np0005596060 podman[261515]: 2026-01-26 18:11:26.317032657 +0000 UTC m=+0.056781622 container create c6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 13:11:26 np0005596060 systemd[1]: Started libpod-conmon-c6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67.scope.
Jan 26 13:11:26 np0005596060 podman[261515]: 2026-01-26 18:11:26.286273747 +0000 UTC m=+0.026022742 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:11:26 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:11:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1555f784475c80952b429e45e9cadecc4c811ddc4affe92e9d0c0c2999e0ed40/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:26 np0005596060 podman[261515]: 2026-01-26 18:11:26.397680774 +0000 UTC m=+0.137429749 container init c6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:11:26 np0005596060 podman[261515]: 2026-01-26 18:11:26.403271344 +0000 UTC m=+0.143020309 container start c6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 13:11:26 np0005596060 neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8[261531]: [NOTICE]   (261535) : New worker (261537) forked
Jan 26 13:11:26 np0005596060 neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8[261531]: [NOTICE]   (261535) : Loading success.
Jan 26 13:11:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:26.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.852 247428 DEBUG nova.compute.manager [req-ce41a160-956c-4c34-831b-8d6b556c9196 req-28e7e49a-c464-4d09-a728-3f8a638894bc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Received event network-vif-plugged-fc39dca6-125d-4797-8ccf-b4306dac78b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.852 247428 DEBUG oslo_concurrency.lockutils [req-ce41a160-956c-4c34-831b-8d6b556c9196 req-28e7e49a-c464-4d09-a728-3f8a638894bc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.852 247428 DEBUG oslo_concurrency.lockutils [req-ce41a160-956c-4c34-831b-8d6b556c9196 req-28e7e49a-c464-4d09-a728-3f8a638894bc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.852 247428 DEBUG oslo_concurrency.lockutils [req-ce41a160-956c-4c34-831b-8d6b556c9196 req-28e7e49a-c464-4d09-a728-3f8a638894bc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.853 247428 DEBUG nova.compute.manager [req-ce41a160-956c-4c34-831b-8d6b556c9196 req-28e7e49a-c464-4d09-a728-3f8a638894bc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Processing event network-vif-plugged-fc39dca6-125d-4797-8ccf-b4306dac78b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.853 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.857 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769451086.8575814, 269591ef-171e-4d4b-9fa0-97cd49fa40d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.858 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.860 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.864 247428 INFO nova.virt.libvirt.driver [-] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Instance spawned successfully.#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.865 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.883 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.891 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: deleting, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.894 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.894 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.895 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.895 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.895 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.896 247428 DEBUG nova.virt.libvirt.driver [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.924 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] During sync_power_state the instance has a pending task (deleting). Skip.#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.981 247428 INFO nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Took 9.77 seconds to spawn the instance on the hypervisor.#033[00m
Jan 26 13:11:26 np0005596060 nova_compute[247421]: 2026-01-26 18:11:26.981 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.040 247428 DEBUG nova.compute.utils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Conflict updating instance 269591ef-171e-4d4b-9fa0-97cd49fa40d0. Expected: {'task_state': ['spawning']}. Actual: {'task_state': 'deleting'} notify_about_instance_usage /usr/lib/python3.9/site-packages/nova/compute/utils.py:430#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.042 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Instance disappeared during build. _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2483#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.042 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Unplugging VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:2976#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.042 247428 DEBUG nova.virt.libvirt.vif [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:11:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2043601698',display_name='tempest-ServersAdminTestJSON-server-2043601698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2043601698',id=11,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=2026-01-26T18:11:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b681bb2aa54b41b791e6f56386f44866',ramdisk_id='',reservation_id='r-4co96cbf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio
',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1698802578',owner_user_name='tempest-ServersAdminTestJSON-1698802578-project-member'},tags=TagList,task_state=None,terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:11:21Z,user_data=None,user_id='bb9a263bc00f40ca8042731ef5b267b8',uuid=269591ef-171e-4d4b-9fa0-97cd49fa40d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "address": "fa:16:3e:06:88:3f", "network": {"id": "7b61c336-ec02-4e23-918b-a918c4044fa8", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-671050088-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b681bb2aa54b41b791e6f56386f44866", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc39dca6-12", "ovs_interfaceid": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.043 247428 DEBUG nova.network.os_vif_util [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Converting VIF {"id": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "address": "fa:16:3e:06:88:3f", "network": {"id": "7b61c336-ec02-4e23-918b-a918c4044fa8", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-671050088-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b681bb2aa54b41b791e6f56386f44866", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc39dca6-12", "ovs_interfaceid": "fc39dca6-125d-4797-8ccf-b4306dac78b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.043 247428 DEBUG nova.network.os_vif_util [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:88:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc39dca6-125d-4797-8ccf-b4306dac78b1,network=Network(7b61c336-ec02-4e23-918b-a918c4044fa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc39dca6-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.044 247428 DEBUG os_vif [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:88:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc39dca6-125d-4797-8ccf-b4306dac78b1,network=Network(7b61c336-ec02-4e23-918b-a918c4044fa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc39dca6-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.045 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.045 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc39dca6-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.048 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:27 np0005596060 ovn_controller[148842]: 2026-01-26T18:11:27Z|00097|binding|INFO|Releasing lport fc39dca6-125d-4797-8ccf-b4306dac78b1 from this chassis (sb_readonly=0)
Jan 26 13:11:27 np0005596060 ovn_controller[148842]: 2026-01-26T18:11:27Z|00098|binding|INFO|Setting lport fc39dca6-125d-4797-8ccf-b4306dac78b1 down in Southbound
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.049 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:11:27 np0005596060 kernel: tapfc39dca6-12: left promiscuous mode
Jan 26 13:11:27 np0005596060 NetworkManager[48900]: <info>  [1769451087.0509] device (tapfc39dca6-12): state change: disconnected -> unmanaged (reason 'unmanaged-external-down', managed-type: 'external')
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.064 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.068 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:88:3f 10.100.0.3'], port_security=['fa:16:3e:06:88:3f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '269591ef-171e-4d4b-9fa0-97cd49fa40d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b61c336-ec02-4e23-918b-a918c4044fa8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b681bb2aa54b41b791e6f56386f44866', 'neutron:revision_number': '4', 'neutron:security_group_ids': '16f50677-548c-46d8-8091-74ea747f5d22', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8dbc3aad-d81d-4af3-af0f-f3e61321f3fc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=fc39dca6-125d-4797-8ccf-b4306dac78b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.069 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.070 159331 INFO neutron.agent.ovn.metadata.agent [-] Port fc39dca6-125d-4797-8ccf-b4306dac78b1 in datapath 7b61c336-ec02-4e23-918b-a918c4044fa8 unbound from our chassis#033[00m
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.072 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b61c336-ec02-4e23-918b-a918c4044fa8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.074 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[3012dabb-3597-46cc-afc6-7f6c749fa6be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.074 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8 namespace which is not needed anymore#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.093 247428 INFO os_vif [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:88:3f,bridge_name='br-int',has_traffic_filtering=True,id=fc39dca6-125d-4797-8ccf-b4306dac78b1,network=Network(7b61c336-ec02-4e23-918b-a918c4044fa8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc39dca6-12')#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.093 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Unplugged VIFs for instance _cleanup_allocated_networks /usr/lib/python3.9/site-packages/nova/compute/manager.py:3012#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.094 247428 DEBUG nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.094 247428 DEBUG nova.network.neutron [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:11:27 np0005596060 neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8[261531]: [NOTICE]   (261535) : haproxy version is 2.8.14-c23fe91
Jan 26 13:11:27 np0005596060 neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8[261531]: [NOTICE]   (261535) : path to executable is /usr/sbin/haproxy
Jan 26 13:11:27 np0005596060 neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8[261531]: [WARNING]  (261535) : Exiting Master process...
Jan 26 13:11:27 np0005596060 neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8[261531]: [WARNING]  (261535) : Exiting Master process...
Jan 26 13:11:27 np0005596060 neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8[261531]: [ALERT]    (261535) : Current worker (261537) exited with code 143 (Terminated)
Jan 26 13:11:27 np0005596060 neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8[261531]: [WARNING]  (261535) : All workers exited. Exiting... (0)
Jan 26 13:11:27 np0005596060 systemd[1]: libpod-c6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67.scope: Deactivated successfully.
Jan 26 13:11:27 np0005596060 podman[261566]: 2026-01-26 18:11:27.208167101 +0000 UTC m=+0.046687409 container died c6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 13:11:27 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67-userdata-shm.mount: Deactivated successfully.
Jan 26 13:11:27 np0005596060 systemd[1]: var-lib-containers-storage-overlay-1555f784475c80952b429e45e9cadecc4c811ddc4affe92e9d0c0c2999e0ed40-merged.mount: Deactivated successfully.
Jan 26 13:11:27 np0005596060 podman[261566]: 2026-01-26 18:11:27.244186802 +0000 UTC m=+0.082707080 container cleanup c6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 13:11:27 np0005596060 systemd[1]: libpod-conmon-c6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67.scope: Deactivated successfully.
Jan 26 13:11:27 np0005596060 podman[261597]: 2026-01-26 18:11:27.342978853 +0000 UTC m=+0.077626093 container remove c6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.348 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7186a64a-fecc-4665-8324-a9bbf94f7ea4]: (4, ('Mon Jan 26 06:11:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8 (c6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67)\nc6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67\nMon Jan 26 06:11:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8 (c6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67)\nc6d0b1541d7e6b8a4bce4a9226583da353375f626a9a45e358d15ce90cd31f67\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.349 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[562e0e02-c65b-4968-88d4-cf190b887aaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.350 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b61c336-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.352 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:27 np0005596060 kernel: tap7b61c336-e0: left promiscuous mode
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.355 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.357 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e091cd70-7a22-46ea-9112-4d93a823389d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.371 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e9613246-eaf0-4c76-aa4e-7959646f9973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.372 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[305582e2-0ed8-47cd-a3af-cd30cc6d5978]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:27 np0005596060 nova_compute[247421]: 2026-01-26 18:11:27.374 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.389 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[6763ef4e-b73a-480d-a6b5-38cbea186bba]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485025, 'reachable_time': 38624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261612, 'error': None, 'target': 'ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:27 np0005596060 systemd[1]: run-netns-ovnmeta\x2d7b61c336\x2dec02\x2d4e23\x2d918b\x2da918c4044fa8.mount: Deactivated successfully.
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.396 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7b61c336-ec02-4e23-918b-a918c4044fa8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:11:27 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:27.396 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[661dc9b2-4d45-4169-a582-1ac23beebdac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:11:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:27.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 157 op/s
Jan 26 13:11:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:28.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:29.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.0 MiB/s wr, 157 op/s
Jan 26 13:11:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.488 247428 DEBUG nova.compute.manager [req-358d9f01-ec39-4514-b10d-7c9621584af9 req-a6f0eab8-62a3-4038-8439-aab2debf51cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Received event network-vif-plugged-fc39dca6-125d-4797-8ccf-b4306dac78b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.489 247428 DEBUG oslo_concurrency.lockutils [req-358d9f01-ec39-4514-b10d-7c9621584af9 req-a6f0eab8-62a3-4038-8439-aab2debf51cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.489 247428 DEBUG oslo_concurrency.lockutils [req-358d9f01-ec39-4514-b10d-7c9621584af9 req-a6f0eab8-62a3-4038-8439-aab2debf51cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.490 247428 DEBUG oslo_concurrency.lockutils [req-358d9f01-ec39-4514-b10d-7c9621584af9 req-a6f0eab8-62a3-4038-8439-aab2debf51cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.490 247428 DEBUG nova.compute.manager [req-358d9f01-ec39-4514-b10d-7c9621584af9 req-a6f0eab8-62a3-4038-8439-aab2debf51cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] No waiting events found dispatching network-vif-plugged-fc39dca6-125d-4797-8ccf-b4306dac78b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.490 247428 WARNING nova.compute.manager [req-358d9f01-ec39-4514-b10d-7c9621584af9 req-a6f0eab8-62a3-4038-8439-aab2debf51cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Received unexpected event network-vif-plugged-fc39dca6-125d-4797-8ccf-b4306dac78b1 for instance with vm_state building and task_state deleting.#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.490 247428 DEBUG nova.compute.manager [req-358d9f01-ec39-4514-b10d-7c9621584af9 req-a6f0eab8-62a3-4038-8439-aab2debf51cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Received event network-vif-unplugged-fc39dca6-125d-4797-8ccf-b4306dac78b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.490 247428 DEBUG oslo_concurrency.lockutils [req-358d9f01-ec39-4514-b10d-7c9621584af9 req-a6f0eab8-62a3-4038-8439-aab2debf51cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.491 247428 DEBUG oslo_concurrency.lockutils [req-358d9f01-ec39-4514-b10d-7c9621584af9 req-a6f0eab8-62a3-4038-8439-aab2debf51cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.491 247428 DEBUG oslo_concurrency.lockutils [req-358d9f01-ec39-4514-b10d-7c9621584af9 req-a6f0eab8-62a3-4038-8439-aab2debf51cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.491 247428 DEBUG nova.compute.manager [req-358d9f01-ec39-4514-b10d-7c9621584af9 req-a6f0eab8-62a3-4038-8439-aab2debf51cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] No waiting events found dispatching network-vif-unplugged-fc39dca6-125d-4797-8ccf-b4306dac78b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.491 247428 DEBUG nova.compute.manager [req-358d9f01-ec39-4514-b10d-7c9621584af9 req-a6f0eab8-62a3-4038-8439-aab2debf51cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Received event network-vif-unplugged-fc39dca6-125d-4797-8ccf-b4306dac78b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:11:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.541 247428 DEBUG nova.network.neutron [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.561 247428 INFO nova.compute.manager [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Took 3.47 seconds to deallocate network for instance.#033[00m
Jan 26 13:11:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:30.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.662 247428 DEBUG nova.compute.manager [req-d0a94cf0-6fee-4ef1-b6e9-23b846dcd425 req-29d11f74-2401-4f83-839a-13db72e4c7c6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Received event network-vif-deleted-fc39dca6-125d-4797-8ccf-b4306dac78b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.664 247428 INFO nova.scheduler.client.report [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Deleted allocations for instance 269591ef-171e-4d4b-9fa0-97cd49fa40d0#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.665 247428 DEBUG oslo_concurrency.lockutils [None req-5b226f00-c3c0-4adf-ae63-9015d5fdad56 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.666 247428 DEBUG oslo_concurrency.lockutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 9.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.666 247428 DEBUG oslo_concurrency.lockutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Acquiring lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.667 247428 DEBUG oslo_concurrency.lockutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.667 247428 DEBUG oslo_concurrency.lockutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.669 247428 INFO nova.compute.manager [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Terminating instance#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.670 247428 DEBUG oslo_concurrency.lockutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Acquiring lock "refresh_cache-269591ef-171e-4d4b-9fa0-97cd49fa40d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.670 247428 DEBUG oslo_concurrency.lockutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Acquired lock "refresh_cache-269591ef-171e-4d4b-9fa0-97cd49fa40d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.671 247428 DEBUG nova.network.neutron [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:11:30 np0005596060 podman[261937]: 2026-01-26 18:11:30.614123076 +0000 UTC m=+0.022878453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.841 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:30 np0005596060 podman[261937]: 2026-01-26 18:11:30.871036504 +0000 UTC m=+0.279791851 container create 81c604aaf99242e9991ac3255b158a267b5a6b011180149f93ceb9bd76e65ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 13:11:30 np0005596060 nova_compute[247421]: 2026-01-26 18:11:30.871 247428 DEBUG nova.network.neutron [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 26 13:11:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:11:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:11:31 np0005596060 systemd[1]: Started libpod-conmon-81c604aaf99242e9991ac3255b158a267b5a6b011180149f93ceb9bd76e65ad6.scope.
Jan 26 13:11:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:11:31 np0005596060 nova_compute[247421]: 2026-01-26 18:11:31.496 247428 DEBUG nova.network.neutron [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:11:31 np0005596060 nova_compute[247421]: 2026-01-26 18:11:31.510 247428 DEBUG oslo_concurrency.lockutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Releasing lock "refresh_cache-269591ef-171e-4d4b-9fa0-97cd49fa40d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:11:31 np0005596060 nova_compute[247421]: 2026-01-26 18:11:31.512 247428 DEBUG nova.compute.manager [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:11:31 np0005596060 podman[261937]: 2026-01-26 18:11:31.514907551 +0000 UTC m=+0.923662878 container init 81c604aaf99242e9991ac3255b158a267b5a6b011180149f93ceb9bd76e65ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_buck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 13:11:31 np0005596060 podman[261937]: 2026-01-26 18:11:31.524341457 +0000 UTC m=+0.933096754 container start 81c604aaf99242e9991ac3255b158a267b5a6b011180149f93ceb9bd76e65ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 13:11:31 np0005596060 intelligent_buck[261952]: 167 167
Jan 26 13:11:31 np0005596060 systemd[1]: libpod-81c604aaf99242e9991ac3255b158a267b5a6b011180149f93ceb9bd76e65ad6.scope: Deactivated successfully.
Jan 26 13:11:31 np0005596060 conmon[261952]: conmon 81c604aaf99242e9991a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-81c604aaf99242e9991ac3255b158a267b5a6b011180149f93ceb9bd76e65ad6.scope/container/memory.events
Jan 26 13:11:31 np0005596060 podman[261937]: 2026-01-26 18:11:31.666071503 +0000 UTC m=+1.074826830 container attach 81c604aaf99242e9991ac3255b158a267b5a6b011180149f93ceb9bd76e65ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_buck, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:11:31 np0005596060 podman[261937]: 2026-01-26 18:11:31.667863978 +0000 UTC m=+1.076619315 container died 81c604aaf99242e9991ac3255b158a267b5a6b011180149f93ceb9bd76e65ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_buck, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 13:11:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:31.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:32 np0005596060 nova_compute[247421]: 2026-01-26 18:11:32.048 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.7 MiB/s wr, 148 op/s
Jan 26 13:11:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8386d0e648fa59e4a46be7c0798f6636ffef93a7bb8c6b8fc95f010bb842afa8-merged.mount: Deactivated successfully.
Jan 26 13:11:32 np0005596060 nova_compute[247421]: 2026-01-26 18:11:32.602 247428 DEBUG nova.compute.manager [req-2c5b5806-db71-4136-ac24-971261337c0e req-9a44a02d-3065-438d-aee9-be0d83b1f59a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Received event network-vif-plugged-fc39dca6-125d-4797-8ccf-b4306dac78b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:11:32 np0005596060 nova_compute[247421]: 2026-01-26 18:11:32.602 247428 DEBUG oslo_concurrency.lockutils [req-2c5b5806-db71-4136-ac24-971261337c0e req-9a44a02d-3065-438d-aee9-be0d83b1f59a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:32 np0005596060 nova_compute[247421]: 2026-01-26 18:11:32.602 247428 DEBUG oslo_concurrency.lockutils [req-2c5b5806-db71-4136-ac24-971261337c0e req-9a44a02d-3065-438d-aee9-be0d83b1f59a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:32 np0005596060 nova_compute[247421]: 2026-01-26 18:11:32.602 247428 DEBUG oslo_concurrency.lockutils [req-2c5b5806-db71-4136-ac24-971261337c0e req-9a44a02d-3065-438d-aee9-be0d83b1f59a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:32 np0005596060 nova_compute[247421]: 2026-01-26 18:11:32.603 247428 DEBUG nova.compute.manager [req-2c5b5806-db71-4136-ac24-971261337c0e req-9a44a02d-3065-438d-aee9-be0d83b1f59a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] No waiting events found dispatching network-vif-plugged-fc39dca6-125d-4797-8ccf-b4306dac78b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:11:32 np0005596060 nova_compute[247421]: 2026-01-26 18:11:32.603 247428 WARNING nova.compute.manager [req-2c5b5806-db71-4136-ac24-971261337c0e req-9a44a02d-3065-438d-aee9-be0d83b1f59a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Received unexpected event network-vif-plugged-fc39dca6-125d-4797-8ccf-b4306dac78b1 for instance with vm_state active and task_state None.#033[00m
Jan 26 13:11:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:32 np0005596060 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 26 13:11:32 np0005596060 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d0000000b.scope: Consumed 5.365s CPU time.
Jan 26 13:11:32 np0005596060 systemd-machined[213879]: Machine qemu-7-instance-0000000b terminated.
Jan 26 13:11:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:33 np0005596060 nova_compute[247421]: 2026-01-26 18:11:33.158 247428 INFO nova.virt.libvirt.driver [-] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Instance destroyed successfully.#033[00m
Jan 26 13:11:33 np0005596060 nova_compute[247421]: 2026-01-26 18:11:33.159 247428 DEBUG nova.objects.instance [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lazy-loading 'resources' on Instance uuid 269591ef-171e-4d4b-9fa0-97cd49fa40d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:11:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:11:33 np0005596060 podman[261937]: 2026-01-26 18:11:33.257522965 +0000 UTC m=+2.666278282 container remove 81c604aaf99242e9991ac3255b158a267b5a6b011180149f93ceb9bd76e65ad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_buck, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:11:33 np0005596060 systemd[1]: libpod-conmon-81c604aaf99242e9991ac3255b158a267b5a6b011180149f93ceb9bd76e65ad6.scope: Deactivated successfully.
Jan 26 13:11:33 np0005596060 podman[261990]: 2026-01-26 18:11:33.414931803 +0000 UTC m=+0.025959131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:11:33 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:33 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:33 np0005596060 podman[261990]: 2026-01-26 18:11:33.756452486 +0000 UTC m=+0.367479794 container create d46c687547b5167025ba9e764ec302df5554f892d81dfb47f56a5ec5712abfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shirley, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 13:11:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:33.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 180 op/s
Jan 26 13:11:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:11:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:34.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:34 np0005596060 systemd[1]: Started libpod-conmon-d46c687547b5167025ba9e764ec302df5554f892d81dfb47f56a5ec5712abfee.scope.
Jan 26 13:11:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:34 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:11:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82ace968135899e289d9f9093b3ef4944a08fed29d84296ad5935bf82b6d7f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82ace968135899e289d9f9093b3ef4944a08fed29d84296ad5935bf82b6d7f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82ace968135899e289d9f9093b3ef4944a08fed29d84296ad5935bf82b6d7f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82ace968135899e289d9f9093b3ef4944a08fed29d84296ad5935bf82b6d7f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:34 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:11:34 np0005596060 podman[261990]: 2026-01-26 18:11:34.914120258 +0000 UTC m=+1.525147586 container init d46c687547b5167025ba9e764ec302df5554f892d81dfb47f56a5ec5712abfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shirley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 26 13:11:34 np0005596060 podman[261990]: 2026-01-26 18:11:34.92259904 +0000 UTC m=+1.533626348 container start d46c687547b5167025ba9e764ec302df5554f892d81dfb47f56a5ec5712abfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shirley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 13:11:34 np0005596060 podman[261990]: 2026-01-26 18:11:34.943061642 +0000 UTC m=+1.554088950 container attach d46c687547b5167025ba9e764ec302df5554f892d81dfb47f56a5ec5712abfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:11:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:11:35 np0005596060 nova_compute[247421]: 2026-01-26 18:11:35.809 247428 INFO nova.virt.libvirt.driver [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Deleting instance files /var/lib/nova/instances/269591ef-171e-4d4b-9fa0-97cd49fa40d0_del#033[00m
Jan 26 13:11:35 np0005596060 nova_compute[247421]: 2026-01-26 18:11:35.811 247428 INFO nova.virt.libvirt.driver [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Deletion of /var/lib/nova/instances/269591ef-171e-4d4b-9fa0-97cd49fa40d0_del complete#033[00m
Jan 26 13:11:35 np0005596060 nova_compute[247421]: 2026-01-26 18:11:35.847 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:35.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:35 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:35 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:35 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:35 np0005596060 nova_compute[247421]: 2026-01-26 18:11:35.997 247428 INFO nova.compute.manager [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Took 4.48 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:11:36 np0005596060 nova_compute[247421]: 2026-01-26 18:11:36.000 247428 DEBUG oslo.service.loopingcall [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:11:36 np0005596060 nova_compute[247421]: 2026-01-26 18:11:36.000 247428 DEBUG nova.compute.manager [-] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:11:36 np0005596060 nova_compute[247421]: 2026-01-26 18:11:36.001 247428 DEBUG nova.network.neutron [-] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:11:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 167 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 138 op/s
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:36 np0005596060 boring_shirley[262025]: [
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:    {
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:        "available": false,
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:        "ceph_device": false,
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:        "lsm_data": {},
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:        "lvs": [],
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:        "path": "/dev/sr0",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:        "rejected_reasons": [
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "Insufficient space (<5GB)",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "Has a FileSystem"
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:        ],
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:        "sys_api": {
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "actuators": null,
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "device_nodes": "sr0",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "devname": "sr0",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "human_readable_size": "482.00 KB",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "id_bus": "ata",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "model": "QEMU DVD-ROM",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "nr_requests": "2",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "parent": "/dev/sr0",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "partitions": {},
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "path": "/dev/sr0",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "removable": "1",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "rev": "2.5+",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "ro": "0",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "rotational": "1",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "sas_address": "",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "sas_device_handle": "",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "scheduler_mode": "mq-deadline",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "sectors": 0,
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "sectorsize": "2048",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "size": 493568.0,
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "support_discard": "2048",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "type": "disk",
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:            "vendor": "QEMU"
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:        }
Jan 26 13:11:36 np0005596060 boring_shirley[262025]:    }
Jan 26 13:11:36 np0005596060 boring_shirley[262025]: ]
Jan 26 13:11:36 np0005596060 systemd[1]: libpod-d46c687547b5167025ba9e764ec302df5554f892d81dfb47f56a5ec5712abfee.scope: Deactivated successfully.
Jan 26 13:11:36 np0005596060 systemd[1]: libpod-d46c687547b5167025ba9e764ec302df5554f892d81dfb47f56a5ec5712abfee.scope: Consumed 1.359s CPU time.
Jan 26 13:11:36 np0005596060 podman[263309]: 2026-01-26 18:11:36.35840824 +0000 UTC m=+0.024893024 container died d46c687547b5167025ba9e764ec302df5554f892d81dfb47f56a5ec5712abfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:11:36 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b82ace968135899e289d9f9093b3ef4944a08fed29d84296ad5935bf82b6d7f6-merged.mount: Deactivated successfully.
Jan 26 13:11:36 np0005596060 podman[263309]: 2026-01-26 18:11:36.409001895 +0000 UTC m=+0.075486649 container remove d46c687547b5167025ba9e764ec302df5554f892d81dfb47f56a5ec5712abfee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 13:11:36 np0005596060 systemd[1]: libpod-conmon-d46c687547b5167025ba9e764ec302df5554f892d81dfb47f56a5ec5712abfee.scope: Deactivated successfully.
Jan 26 13:11:36 np0005596060 nova_compute[247421]: 2026-01-26 18:11:36.431 247428 DEBUG nova.network.neutron [-] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:36 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev e47f9d70-4123-410a-b9ee-bcbdf64a32eb does not exist
Jan 26 13:11:36 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b7ecca80-32f5-4176-95a3-51da296c926b does not exist
Jan 26 13:11:36 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ead8acce-911a-4014-8976-d645a8d683c0 does not exist
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:11:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:36.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 26 13:11:37 np0005596060 nova_compute[247421]: 2026-01-26 18:11:37.052 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:37 np0005596060 nova_compute[247421]: 2026-01-26 18:11:37.057 247428 DEBUG nova.network.neutron [-] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:11:37 np0005596060 nova_compute[247421]: 2026-01-26 18:11:37.085 247428 INFO nova.compute.manager [-] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Took 1.08 seconds to deallocate network for instance.#033[00m
Jan 26 13:11:37 np0005596060 podman[263464]: 2026-01-26 18:11:37.049892978 +0000 UTC m=+0.024597287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:11:37 np0005596060 nova_compute[247421]: 2026-01-26 18:11:37.296 247428 DEBUG oslo_concurrency.lockutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:11:37 np0005596060 nova_compute[247421]: 2026-01-26 18:11:37.296 247428 DEBUG oslo_concurrency.lockutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:11:37 np0005596060 nova_compute[247421]: 2026-01-26 18:11:37.325 247428 DEBUG oslo_concurrency.processutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:11:37 np0005596060 podman[263464]: 2026-01-26 18:11:37.525463175 +0000 UTC m=+0.500167434 container create 66578972bda52056f080220fd3df56449c6c3cd7f6709ea98cf87053af7c7063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brahmagupta, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 13:11:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 26 13:11:37 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 26 13:11:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:37.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:11:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/999847430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:11:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 122 op/s
Jan 26 13:11:38 np0005596060 nova_compute[247421]: 2026-01-26 18:11:38.072 247428 DEBUG oslo_concurrency.processutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:11:38 np0005596060 nova_compute[247421]: 2026-01-26 18:11:38.079 247428 DEBUG nova.compute.provider_tree [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:11:38 np0005596060 systemd[1]: Started libpod-conmon-66578972bda52056f080220fd3df56449c6c3cd7f6709ea98cf87053af7c7063.scope.
Jan 26 13:11:38 np0005596060 nova_compute[247421]: 2026-01-26 18:11:38.147 247428 DEBUG nova.scheduler.client.report [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:11:38 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:11:38 np0005596060 nova_compute[247421]: 2026-01-26 18:11:38.179 247428 DEBUG oslo_concurrency.lockutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:38 np0005596060 nova_compute[247421]: 2026-01-26 18:11:38.280 247428 DEBUG oslo_concurrency.lockutils [None req-0a843e40-b195-4fa8-92b2-d46e229bbef6 bb9a263bc00f40ca8042731ef5b267b8 b681bb2aa54b41b791e6f56386f44866 - - default default] Lock "269591ef-171e-4d4b-9fa0-97cd49fa40d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:11:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:11:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:11:38 np0005596060 podman[263464]: 2026-01-26 18:11:38.361782297 +0000 UTC m=+1.336486586 container init 66578972bda52056f080220fd3df56449c6c3cd7f6709ea98cf87053af7c7063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 13:11:38 np0005596060 podman[263464]: 2026-01-26 18:11:38.373028308 +0000 UTC m=+1.347732567 container start 66578972bda52056f080220fd3df56449c6c3cd7f6709ea98cf87053af7c7063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:11:38 np0005596060 systemd[1]: libpod-66578972bda52056f080220fd3df56449c6c3cd7f6709ea98cf87053af7c7063.scope: Deactivated successfully.
Jan 26 13:11:38 np0005596060 eager_brahmagupta[263503]: 167 167
Jan 26 13:11:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:38.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:38 np0005596060 podman[263464]: 2026-01-26 18:11:38.660559171 +0000 UTC m=+1.635263470 container attach 66578972bda52056f080220fd3df56449c6c3cd7f6709ea98cf87053af7c7063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:11:38 np0005596060 podman[263464]: 2026-01-26 18:11:38.661773211 +0000 UTC m=+1.636477480 container died 66578972bda52056f080220fd3df56449c6c3cd7f6709ea98cf87053af7c7063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:11:39 np0005596060 systemd[1]: var-lib-containers-storage-overlay-67d1e982e33a5918f1f9e21225ecc14f7dd1ebe0c13745bf6fa9e9c879092c4d-merged.mount: Deactivated successfully.
Jan 26 13:11:39 np0005596060 nova_compute[247421]: 2026-01-26 18:11:39.325 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:39.326 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:11:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:39.328 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:11:39 np0005596060 podman[263464]: 2026-01-26 18:11:39.771016071 +0000 UTC m=+2.745720330 container remove 66578972bda52056f080220fd3df56449c6c3cd7f6709ea98cf87053af7c7063 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_brahmagupta, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 26 13:11:39 np0005596060 systemd[1]: libpod-conmon-66578972bda52056f080220fd3df56449c6c3cd7f6709ea98cf87053af7c7063.scope: Deactivated successfully.
Jan 26 13:11:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:39.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:39 np0005596060 podman[263523]: 2026-01-26 18:11:39.925308621 +0000 UTC m=+0.095055959 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 13:11:40 np0005596060 podman[263547]: 2026-01-26 18:11:39.93842999 +0000 UTC m=+0.028714650 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:11:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 121 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.8 KiB/s wr, 122 op/s
Jan 26 13:11:40 np0005596060 podman[263547]: 2026-01-26 18:11:40.083087678 +0000 UTC m=+0.173372328 container create 54dfed5507e03fd6596aa1870eee3a29c633a396a6fd83a2da69107c64cf6d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wozniak, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:11:40 np0005596060 podman[263549]: 2026-01-26 18:11:40.120796282 +0000 UTC m=+0.196143668 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 13:11:40 np0005596060 systemd[1]: Started libpod-conmon-54dfed5507e03fd6596aa1870eee3a29c633a396a6fd83a2da69107c64cf6d76.scope.
Jan 26 13:11:40 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:11:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e935a86709d32efa3284ffbe3361012ebf8fda4061a14fd6a1691ac73c2559c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e935a86709d32efa3284ffbe3361012ebf8fda4061a14fd6a1691ac73c2559c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e935a86709d32efa3284ffbe3361012ebf8fda4061a14fd6a1691ac73c2559c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e935a86709d32efa3284ffbe3361012ebf8fda4061a14fd6a1691ac73c2559c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e935a86709d32efa3284ffbe3361012ebf8fda4061a14fd6a1691ac73c2559c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:40 np0005596060 podman[263547]: 2026-01-26 18:11:40.196611418 +0000 UTC m=+0.286896078 container init 54dfed5507e03fd6596aa1870eee3a29c633a396a6fd83a2da69107c64cf6d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 13:11:40 np0005596060 podman[263547]: 2026-01-26 18:11:40.211790738 +0000 UTC m=+0.302075368 container start 54dfed5507e03fd6596aa1870eee3a29c633a396a6fd83a2da69107c64cf6d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wozniak, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:11:40 np0005596060 podman[263547]: 2026-01-26 18:11:40.220693871 +0000 UTC m=+0.310978621 container attach 54dfed5507e03fd6596aa1870eee3a29c633a396a6fd83a2da69107c64cf6d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 13:11:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:11:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4098300448' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:11:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:11:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4098300448' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:11:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:11:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:40.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:40 np0005596060 nova_compute[247421]: 2026-01-26 18:11:40.849 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:41 np0005596060 nifty_wozniak[263587]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:11:41 np0005596060 nifty_wozniak[263587]: --> relative data size: 1.0
Jan 26 13:11:41 np0005596060 nifty_wozniak[263587]: --> All data devices are unavailable
Jan 26 13:11:41 np0005596060 systemd[1]: libpod-54dfed5507e03fd6596aa1870eee3a29c633a396a6fd83a2da69107c64cf6d76.scope: Deactivated successfully.
Jan 26 13:11:41 np0005596060 podman[263547]: 2026-01-26 18:11:41.066342965 +0000 UTC m=+1.156627615 container died 54dfed5507e03fd6596aa1870eee3a29c633a396a6fd83a2da69107c64cf6d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wozniak, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:11:41 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2e935a86709d32efa3284ffbe3361012ebf8fda4061a14fd6a1691ac73c2559c-merged.mount: Deactivated successfully.
Jan 26 13:11:41 np0005596060 podman[263547]: 2026-01-26 18:11:41.12526746 +0000 UTC m=+1.215552090 container remove 54dfed5507e03fd6596aa1870eee3a29c633a396a6fd83a2da69107c64cf6d76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_wozniak, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:11:41 np0005596060 systemd[1]: libpod-conmon-54dfed5507e03fd6596aa1870eee3a29c633a396a6fd83a2da69107c64cf6d76.scope: Deactivated successfully.
Jan 26 13:11:41 np0005596060 podman[263753]: 2026-01-26 18:11:41.774369898 +0000 UTC m=+0.038820282 container create 5ac405b813b18964a111e424296c8eccf6a972e26ccf07b88c7e6f155fe682c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 13:11:41 np0005596060 systemd[1]: Started libpod-conmon-5ac405b813b18964a111e424296c8eccf6a972e26ccf07b88c7e6f155fe682c6.scope.
Jan 26 13:11:41 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:11:41 np0005596060 podman[263753]: 2026-01-26 18:11:41.849014656 +0000 UTC m=+0.113465060 container init 5ac405b813b18964a111e424296c8eccf6a972e26ccf07b88c7e6f155fe682c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kowalevski, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 26 13:11:41 np0005596060 podman[263753]: 2026-01-26 18:11:41.757360833 +0000 UTC m=+0.021811247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:11:41 np0005596060 podman[263753]: 2026-01-26 18:11:41.855639411 +0000 UTC m=+0.120089785 container start 5ac405b813b18964a111e424296c8eccf6a972e26ccf07b88c7e6f155fe682c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kowalevski, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 26 13:11:41 np0005596060 podman[263753]: 2026-01-26 18:11:41.859362164 +0000 UTC m=+0.123812568 container attach 5ac405b813b18964a111e424296c8eccf6a972e26ccf07b88c7e6f155fe682c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:11:41 np0005596060 bold_kowalevski[263770]: 167 167
Jan 26 13:11:41 np0005596060 systemd[1]: libpod-5ac405b813b18964a111e424296c8eccf6a972e26ccf07b88c7e6f155fe682c6.scope: Deactivated successfully.
Jan 26 13:11:41 np0005596060 podman[263753]: 2026-01-26 18:11:41.862815861 +0000 UTC m=+0.127266275 container died 5ac405b813b18964a111e424296c8eccf6a972e26ccf07b88c7e6f155fe682c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:11:41 np0005596060 systemd[1]: var-lib-containers-storage-overlay-df9ed80b6cc8e5ec67dd55ada425127dec2bb54ad573959f14376dae5a1750ca-merged.mount: Deactivated successfully.
Jan 26 13:11:41 np0005596060 podman[263753]: 2026-01-26 18:11:41.905346015 +0000 UTC m=+0.169796409 container remove 5ac405b813b18964a111e424296c8eccf6a972e26ccf07b88c7e6f155fe682c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:11:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:41 np0005596060 systemd[1]: libpod-conmon-5ac405b813b18964a111e424296c8eccf6a972e26ccf07b88c7e6f155fe682c6.scope: Deactivated successfully.
Jan 26 13:11:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:41.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:42 np0005596060 nova_compute[247421]: 2026-01-26 18:11:42.053 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 KiB/s wr, 104 op/s
Jan 26 13:11:42 np0005596060 podman[263794]: 2026-01-26 18:11:42.072387164 +0000 UTC m=+0.044786492 container create 45af1fda85bf919e9c1024ad239b87e57486b81a912e2ff4bd3e4c5501efbd54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 13:11:42 np0005596060 systemd[1]: Started libpod-conmon-45af1fda85bf919e9c1024ad239b87e57486b81a912e2ff4bd3e4c5501efbd54.scope.
Jan 26 13:11:42 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:11:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f81b304c1d5b368583a7aa4a6444d2538f924e0aa44eb9678c57d165ce4047/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f81b304c1d5b368583a7aa4a6444d2538f924e0aa44eb9678c57d165ce4047/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f81b304c1d5b368583a7aa4a6444d2538f924e0aa44eb9678c57d165ce4047/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f81b304c1d5b368583a7aa4a6444d2538f924e0aa44eb9678c57d165ce4047/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:42 np0005596060 podman[263794]: 2026-01-26 18:11:42.051551632 +0000 UTC m=+0.023950980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:11:42 np0005596060 podman[263794]: 2026-01-26 18:11:42.146126568 +0000 UTC m=+0.118525916 container init 45af1fda85bf919e9c1024ad239b87e57486b81a912e2ff4bd3e4c5501efbd54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 13:11:42 np0005596060 podman[263794]: 2026-01-26 18:11:42.154412136 +0000 UTC m=+0.126811464 container start 45af1fda85bf919e9c1024ad239b87e57486b81a912e2ff4bd3e4c5501efbd54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 13:11:42 np0005596060 podman[263794]: 2026-01-26 18:11:42.157416311 +0000 UTC m=+0.129815639 container attach 45af1fda85bf919e9c1024ad239b87e57486b81a912e2ff4bd3e4c5501efbd54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 13:11:42 np0005596060 nova_compute[247421]: 2026-01-26 18:11:42.247 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:42.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:42 np0005596060 angry_shannon[263810]: {
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:    "1": [
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:        {
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            "devices": [
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "/dev/loop3"
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            ],
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            "lv_name": "ceph_lv0",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            "lv_size": "7511998464",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            "name": "ceph_lv0",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            "tags": {
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "ceph.cluster_name": "ceph",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "ceph.crush_device_class": "",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "ceph.encrypted": "0",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "ceph.osd_id": "1",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "ceph.type": "block",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:                "ceph.vdo": "0"
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            },
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            "type": "block",
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:            "vg_name": "ceph_vg0"
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:        }
Jan 26 13:11:42 np0005596060 angry_shannon[263810]:    ]
Jan 26 13:11:42 np0005596060 angry_shannon[263810]: }
Jan 26 13:11:42 np0005596060 systemd[1]: libpod-45af1fda85bf919e9c1024ad239b87e57486b81a912e2ff4bd3e4c5501efbd54.scope: Deactivated successfully.
Jan 26 13:11:42 np0005596060 podman[263794]: 2026-01-26 18:11:42.902693215 +0000 UTC m=+0.875092563 container died 45af1fda85bf919e9c1024ad239b87e57486b81a912e2ff4bd3e4c5501efbd54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:11:42 np0005596060 systemd[1]: var-lib-containers-storage-overlay-25f81b304c1d5b368583a7aa4a6444d2538f924e0aa44eb9678c57d165ce4047-merged.mount: Deactivated successfully.
Jan 26 13:11:42 np0005596060 podman[263794]: 2026-01-26 18:11:42.959771273 +0000 UTC m=+0.932170601 container remove 45af1fda85bf919e9c1024ad239b87e57486b81a912e2ff4bd3e4c5501efbd54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shannon, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:11:42 np0005596060 systemd[1]: libpod-conmon-45af1fda85bf919e9c1024ad239b87e57486b81a912e2ff4bd3e4c5501efbd54.scope: Deactivated successfully.
Jan 26 13:11:43 np0005596060 podman[263972]: 2026-01-26 18:11:43.5408622 +0000 UTC m=+0.024242097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:11:43 np0005596060 podman[263972]: 2026-01-26 18:11:43.690766491 +0000 UTC m=+0.174146368 container create 43c8be99d71ed1f1b5ac9b4019c52019e2ddadbc672bba44db89c5686f1ad9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:11:43 np0005596060 systemd[1]: Started libpod-conmon-43c8be99d71ed1f1b5ac9b4019c52019e2ddadbc672bba44db89c5686f1ad9be.scope.
Jan 26 13:11:43 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:11:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:43.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:43 np0005596060 podman[263972]: 2026-01-26 18:11:43.923312467 +0000 UTC m=+0.406692374 container init 43c8be99d71ed1f1b5ac9b4019c52019e2ddadbc672bba44db89c5686f1ad9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 13:11:43 np0005596060 podman[263972]: 2026-01-26 18:11:43.931686297 +0000 UTC m=+0.415066194 container start 43c8be99d71ed1f1b5ac9b4019c52019e2ddadbc672bba44db89c5686f1ad9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_joliot, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:11:43 np0005596060 quirky_joliot[263989]: 167 167
Jan 26 13:11:43 np0005596060 systemd[1]: libpod-43c8be99d71ed1f1b5ac9b4019c52019e2ddadbc672bba44db89c5686f1ad9be.scope: Deactivated successfully.
Jan 26 13:11:44 np0005596060 podman[263972]: 2026-01-26 18:11:44.054957181 +0000 UTC m=+0.538337098 container attach 43c8be99d71ed1f1b5ac9b4019c52019e2ddadbc672bba44db89c5686f1ad9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_joliot, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Jan 26 13:11:44 np0005596060 podman[263972]: 2026-01-26 18:11:44.055469163 +0000 UTC m=+0.538849080 container died 43c8be99d71ed1f1b5ac9b4019c52019e2ddadbc672bba44db89c5686f1ad9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 936 KiB/s rd, 1.9 KiB/s wr, 84 op/s
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:11:44
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'volumes', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.control']
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:11:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:11:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:44.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:45 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d384607c8f4ace0196184cc7fe2f4d692e3089b1c7c3e168db3e7d7b486afe34-merged.mount: Deactivated successfully.
Jan 26 13:11:45 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:11:45.330 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:11:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:11:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 26 13:11:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 26 13:11:45 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 26 13:11:45 np0005596060 podman[263972]: 2026-01-26 18:11:45.693995724 +0000 UTC m=+2.177375641 container remove 43c8be99d71ed1f1b5ac9b4019c52019e2ddadbc672bba44db89c5686f1ad9be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:11:45 np0005596060 systemd[1]: libpod-conmon-43c8be99d71ed1f1b5ac9b4019c52019e2ddadbc672bba44db89c5686f1ad9be.scope: Deactivated successfully.
Jan 26 13:11:45 np0005596060 nova_compute[247421]: 2026-01-26 18:11:45.851 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:45.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:45 np0005596060 podman[264016]: 2026-01-26 18:11:45.977681951 +0000 UTC m=+0.114841924 container create a8b55098d0e38f693481e7bc646f895debe054252cb735da8577f7c2a87ab6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:11:45 np0005596060 podman[264016]: 2026-01-26 18:11:45.892321116 +0000 UTC m=+0.029481149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:11:46 np0005596060 systemd[1]: Started libpod-conmon-a8b55098d0e38f693481e7bc646f895debe054252cb735da8577f7c2a87ab6b0.scope.
Jan 26 13:11:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 122 B/s wr, 46 op/s
Jan 26 13:11:46 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:11:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45bd6d52149cac7da20d42acb7a1fbaea1908559e20d7ff5f1225e3303dd8b89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45bd6d52149cac7da20d42acb7a1fbaea1908559e20d7ff5f1225e3303dd8b89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45bd6d52149cac7da20d42acb7a1fbaea1908559e20d7ff5f1225e3303dd8b89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45bd6d52149cac7da20d42acb7a1fbaea1908559e20d7ff5f1225e3303dd8b89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:11:46 np0005596060 podman[264016]: 2026-01-26 18:11:46.094662687 +0000 UTC m=+0.231822680 container init a8b55098d0e38f693481e7bc646f895debe054252cb735da8577f7c2a87ab6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hopper, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:11:46 np0005596060 podman[264016]: 2026-01-26 18:11:46.10477155 +0000 UTC m=+0.241931523 container start a8b55098d0e38f693481e7bc646f895debe054252cb735da8577f7c2a87ab6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:11:46 np0005596060 podman[264016]: 2026-01-26 18:11:46.109756805 +0000 UTC m=+0.246916828 container attach a8b55098d0e38f693481e7bc646f895debe054252cb735da8577f7c2a87ab6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:11:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:46.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:46 np0005596060 jovial_hopper[264032]: {
Jan 26 13:11:46 np0005596060 jovial_hopper[264032]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:11:46 np0005596060 jovial_hopper[264032]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:11:46 np0005596060 jovial_hopper[264032]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:11:46 np0005596060 jovial_hopper[264032]:        "osd_id": 1,
Jan 26 13:11:46 np0005596060 jovial_hopper[264032]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:11:46 np0005596060 jovial_hopper[264032]:        "type": "bluestore"
Jan 26 13:11:46 np0005596060 jovial_hopper[264032]:    }
Jan 26 13:11:46 np0005596060 jovial_hopper[264032]: }
Jan 26 13:11:47 np0005596060 systemd[1]: libpod-a8b55098d0e38f693481e7bc646f895debe054252cb735da8577f7c2a87ab6b0.scope: Deactivated successfully.
Jan 26 13:11:47 np0005596060 nova_compute[247421]: 2026-01-26 18:11:47.057 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:47 np0005596060 podman[264053]: 2026-01-26 18:11:47.076248624 +0000 UTC m=+0.031091909 container died a8b55098d0e38f693481e7bc646f895debe054252cb735da8577f7c2a87ab6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 26 13:11:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay-45bd6d52149cac7da20d42acb7a1fbaea1908559e20d7ff5f1225e3303dd8b89-merged.mount: Deactivated successfully.
Jan 26 13:11:47 np0005596060 podman[264053]: 2026-01-26 18:11:47.136707606 +0000 UTC m=+0.091550861 container remove a8b55098d0e38f693481e7bc646f895debe054252cb735da8577f7c2a87ab6b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_hopper, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 13:11:47 np0005596060 systemd[1]: libpod-conmon-a8b55098d0e38f693481e7bc646f895debe054252cb735da8577f7c2a87ab6b0.scope: Deactivated successfully.
Jan 26 13:11:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:11:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:47.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 818 B/s wr, 107 op/s
Jan 26 13:11:48 np0005596060 nova_compute[247421]: 2026-01-26 18:11:48.158 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769451093.1565104, 269591ef-171e-4d4b-9fa0-97cd49fa40d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:11:48 np0005596060 nova_compute[247421]: 2026-01-26 18:11:48.159 247428 INFO nova.compute.manager [-] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:11:48 np0005596060 nova_compute[247421]: 2026-01-26 18:11:48.181 247428 DEBUG nova.compute.manager [None req-d77d10e0-4862-45ae-af7a-803dbd28c81c - - - - - -] [instance: 269591ef-171e-4d4b-9fa0-97cd49fa40d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:11:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:11:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:48 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9c3811ac-a6f7-4a07-ace1-f957eea76fdd does not exist
Jan 26 13:11:48 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7c0e4cd1-1493-452c-863f-94a2fcfdb0c4 does not exist
Jan 26 13:11:48 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 643c1dd4-c569-4cb3-a068-7f4fb1b5e78f does not exist
Jan 26 13:11:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:48.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:11:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:49.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 818 B/s wr, 107 op/s
Jan 26 13:11:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:11:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:50.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:50 np0005596060 nova_compute[247421]: 2026-01-26 18:11:50.853 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:51.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:52 np0005596060 nova_compute[247421]: 2026-01-26 18:11:52.061 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 KiB/s wr, 112 op/s
Jan 26 13:11:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:52.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:53 np0005596060 nova_compute[247421]: 2026-01-26 18:11:53.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:53.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.4 KiB/s wr, 80 op/s
Jan 26 13:11:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:54.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:55 np0005596060 nova_compute[247421]: 2026-01-26 18:11:55.561 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:55 np0005596060 nova_compute[247421]: 2026-01-26 18:11:55.562 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:11:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:11:55 np0005596060 nova_compute[247421]: 2026-01-26 18:11:55.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:55 np0005596060 nova_compute[247421]: 2026-01-26 18:11:55.855 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:55.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.3 KiB/s wr, 77 op/s
Jan 26 13:11:56 np0005596060 nova_compute[247421]: 2026-01-26 18:11:56.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:56 np0005596060 nova_compute[247421]: 2026-01-26 18:11:56.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:56.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:57 np0005596060 nova_compute[247421]: 2026-01-26 18:11:57.063 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:11:57 np0005596060 nova_compute[247421]: 2026-01-26 18:11:57.718 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:57 np0005596060 nova_compute[247421]: 2026-01-26 18:11:57.718 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 26 13:11:57 np0005596060 nova_compute[247421]: 2026-01-26 18:11:57.733 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 26 13:11:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:57.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:11:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 KiB/s wr, 67 op/s
Jan 26 13:11:58 np0005596060 nova_compute[247421]: 2026-01-26 18:11:58.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:58 np0005596060 nova_compute[247421]: 2026-01-26 18:11:58.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:58 np0005596060 nova_compute[247421]: 2026-01-26 18:11:58.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:58 np0005596060 nova_compute[247421]: 2026-01-26 18:11:58.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:58 np0005596060 nova_compute[247421]: 2026-01-26 18:11:58.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 26 13:11:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:11:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:11:58.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:11:59 np0005596060 nova_compute[247421]: 2026-01-26 18:11:59.852 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:11:59 np0005596060 nova_compute[247421]: 2026-01-26 18:11:59.852 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:11:59 np0005596060 nova_compute[247421]: 2026-01-26 18:11:59.852 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:11:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:11:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:11:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:11:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 7.1 KiB/s rd, 597 B/s wr, 10 op/s
Jan 26 13:12:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:12:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:00.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:00 np0005596060 nova_compute[247421]: 2026-01-26 18:12:00.857 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:00 np0005596060 nova_compute[247421]: 2026-01-26 18:12:00.918 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:12:00 np0005596060 nova_compute[247421]: 2026-01-26 18:12:00.918 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:12:00 np0005596060 nova_compute[247421]: 2026-01-26 18:12:00.919 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:12:00 np0005596060 nova_compute[247421]: 2026-01-26 18:12:00.948 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:12:00 np0005596060 nova_compute[247421]: 2026-01-26 18:12:00.948 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:12:00 np0005596060 nova_compute[247421]: 2026-01-26 18:12:00.948 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:12:00 np0005596060 nova_compute[247421]: 2026-01-26 18:12:00.949 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:12:00 np0005596060 nova_compute[247421]: 2026-01-26 18:12:00.949 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:12:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:12:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3219971851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:12:01 np0005596060 nova_compute[247421]: 2026-01-26 18:12:01.381 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:12:01 np0005596060 nova_compute[247421]: 2026-01-26 18:12:01.556 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:12:01 np0005596060 nova_compute[247421]: 2026-01-26 18:12:01.558 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4806MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:12:01 np0005596060 nova_compute[247421]: 2026-01-26 18:12:01.558 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:12:01 np0005596060 nova_compute[247421]: 2026-01-26 18:12:01.558 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:12:01 np0005596060 nova_compute[247421]: 2026-01-26 18:12:01.844 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:12:01 np0005596060 nova_compute[247421]: 2026-01-26 18:12:01.845 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:12:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:01.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:01 np0005596060 nova_compute[247421]: 2026-01-26 18:12:01.957 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:12:02 np0005596060 nova_compute[247421]: 2026-01-26 18:12:02.065 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 7.1 KiB/s rd, 597 B/s wr, 10 op/s
Jan 26 13:12:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:12:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584006124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:12:02 np0005596060 nova_compute[247421]: 2026-01-26 18:12:02.384 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:12:02 np0005596060 nova_compute[247421]: 2026-01-26 18:12:02.390 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:12:02 np0005596060 nova_compute[247421]: 2026-01-26 18:12:02.414 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:12:02 np0005596060 nova_compute[247421]: 2026-01-26 18:12:02.449 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:12:02 np0005596060 nova_compute[247421]: 2026-01-26 18:12:02.450 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:12:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:02.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:12:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:12:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:03.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail; 4.6 KiB/s rd, 0 B/s wr, 5 op/s
Jan 26 13:12:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:04.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:12:05 np0005596060 nova_compute[247421]: 2026-01-26 18:12:05.859 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:05.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:12:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:06.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:07 np0005596060 nova_compute[247421]: 2026-01-26 18:12:07.068 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:07.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:12:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:08.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:09.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:12:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:12:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:10.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:10 np0005596060 podman[264275]: 2026-01-26 18:12:10.828267364 +0000 UTC m=+0.084860034 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 13:12:10 np0005596060 podman[264276]: 2026-01-26 18:12:10.845386322 +0000 UTC m=+0.098562827 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 13:12:10 np0005596060 nova_compute[247421]: 2026-01-26 18:12:10.861 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:11.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:12 np0005596060 nova_compute[247421]: 2026-01-26 18:12:12.071 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:12:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:12.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:13.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:12:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:12:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:12:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:12:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:12:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:12:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:12:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:14.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:12:14.743 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:12:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:12:14.744 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:12:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:12:14.744 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:12:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:12:15 np0005596060 nova_compute[247421]: 2026-01-26 18:12:15.866 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:15.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:12:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:16.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:17 np0005596060 nova_compute[247421]: 2026-01-26 18:12:17.074 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:17.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:12:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:19 np0005596060 ovn_controller[148842]: 2026-01-26T18:12:19Z|00099|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 26 13:12:19 np0005596060 nova_compute[247421]: 2026-01-26 18:12:19.771 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:12:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:19.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 41 MiB data, 260 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:12:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:12:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:20.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:20 np0005596060 nova_compute[247421]: 2026-01-26 18:12:20.869 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:21.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:22 np0005596060 nova_compute[247421]: 2026-01-26 18:12:22.076 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 53 MiB data, 267 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 607 KiB/s wr, 3 op/s
Jan 26 13:12:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:22.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:23.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 71 MiB data, 267 MiB used, 21 GiB / 21 GiB avail; 8.9 KiB/s rd, 1.1 MiB/s wr, 16 op/s
Jan 26 13:12:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:24.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:12:25 np0005596060 nova_compute[247421]: 2026-01-26 18:12:25.870 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:25.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 71 MiB data, 267 MiB used, 21 GiB / 21 GiB avail; 8.9 KiB/s rd, 1.1 MiB/s wr, 16 op/s
Jan 26 13:12:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:26.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:27 np0005596060 nova_compute[247421]: 2026-01-26 18:12:27.078 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:27.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:12:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:28.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:29.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:12:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:12:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:30.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:30 np0005596060 nova_compute[247421]: 2026-01-26 18:12:30.872 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:31.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:32 np0005596060 nova_compute[247421]: 2026-01-26 18:12:32.081 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 299 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 26 13:12:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:32.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:33.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 818 KiB/s rd, 1.2 MiB/s wr, 59 op/s
Jan 26 13:12:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:34.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:12:35 np0005596060 nova_compute[247421]: 2026-01-26 18:12:35.875 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:35.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 811 KiB/s rd, 671 KiB/s wr, 46 op/s
Jan 26 13:12:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:36.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:37 np0005596060 nova_compute[247421]: 2026-01-26 18:12:37.084 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:37.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 671 KiB/s wr, 84 op/s
Jan 26 13:12:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:38.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:39.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:12:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:12:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:40.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:40 np0005596060 nova_compute[247421]: 2026-01-26 18:12:40.878 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:41 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:12:41.049 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:12:41 np0005596060 nova_compute[247421]: 2026-01-26 18:12:41.050 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:41 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:12:41.050 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:12:41 np0005596060 podman[264383]: 2026-01-26 18:12:41.813315075 +0000 UTC m=+0.062804942 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 13:12:41 np0005596060 podman[264384]: 2026-01-26 18:12:41.837652394 +0000 UTC m=+0.086663560 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:12:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:41.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:12:42 np0005596060 nova_compute[247421]: 2026-01-26 18:12:42.133 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:42.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:43.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:12:44
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control']
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 56 op/s
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:12:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:44.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:12:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:12:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:12:45 np0005596060 nova_compute[247421]: 2026-01-26 18:12:45.878 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:45.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 88 MiB data, 281 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 38 op/s
Jan 26 13:12:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:46.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:47 np0005596060 nova_compute[247421]: 2026-01-26 18:12:47.136 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 26 13:12:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:47.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 113 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 88 op/s
Jan 26 13:12:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 26 13:12:48 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 26 13:12:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:48.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:12:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:12:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:12:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:12:50.052 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:12:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 113 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.4 MiB/s wr, 60 op/s
Jan 26 13:12:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:12:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:12:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:50.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:50 np0005596060 nova_compute[247421]: 2026-01-26 18:12:50.880 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:50 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:50 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:50 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:50 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:51.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 139 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 86 op/s
Jan 26 13:12:52 np0005596060 nova_compute[247421]: 2026-01-26 18:12:52.140 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:52.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 26 13:12:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 26 13:12:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 26 13:12:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:53.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:12:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 141 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 98 op/s
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7149c348-63c3-4060-adb5-c5b01cae64ea does not exist
Jan 26 13:12:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev a3ea1177-4243-47b2-9338-0390124c7c0a does not exist
Jan 26 13:12:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev e44deb8e-2ca4-47c0-9152-ddac775e2d17 does not exist
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:12:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:12:54 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 26 13:12:54 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 26 13:12:54 np0005596060 nova_compute[247421]: 2026-01-26 18:12:54.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:12:54 np0005596060 nova_compute[247421]: 2026-01-26 18:12:54.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:12:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:54.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:54 np0005596060 podman[264870]: 2026-01-26 18:12:54.896446603 +0000 UTC m=+0.026095264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:12:55 np0005596060 podman[264870]: 2026-01-26 18:12:55.083965253 +0000 UTC m=+0.213613864 container create 97c71cc96ea28bfa25f99c94bd9b12e47af36871497f6d1a17f4150acd43d986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mcclintock, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 13:12:55 np0005596060 systemd[1]: Started libpod-conmon-97c71cc96ea28bfa25f99c94bd9b12e47af36871497f6d1a17f4150acd43d986.scope.
Jan 26 13:12:55 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:12:55 np0005596060 podman[264870]: 2026-01-26 18:12:55.439133128 +0000 UTC m=+0.568781759 container init 97c71cc96ea28bfa25f99c94bd9b12e47af36871497f6d1a17f4150acd43d986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 13:12:55 np0005596060 podman[264870]: 2026-01-26 18:12:55.447033716 +0000 UTC m=+0.576682347 container start 97c71cc96ea28bfa25f99c94bd9b12e47af36871497f6d1a17f4150acd43d986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 13:12:55 np0005596060 lucid_mcclintock[264886]: 167 167
Jan 26 13:12:55 np0005596060 systemd[1]: libpod-97c71cc96ea28bfa25f99c94bd9b12e47af36871497f6d1a17f4150acd43d986.scope: Deactivated successfully.
Jan 26 13:12:55 np0005596060 conmon[264886]: conmon 97c71cc96ea28bfa25f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97c71cc96ea28bfa25f99c94bd9b12e47af36871497f6d1a17f4150acd43d986.scope/container/memory.events
Jan 26 13:12:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:12:55 np0005596060 podman[264870]: 2026-01-26 18:12:55.768867517 +0000 UTC m=+0.898516168 container attach 97c71cc96ea28bfa25f99c94bd9b12e47af36871497f6d1a17f4150acd43d986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mcclintock, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 13:12:55 np0005596060 podman[264870]: 2026-01-26 18:12:55.770530219 +0000 UTC m=+0.900178830 container died 97c71cc96ea28bfa25f99c94bd9b12e47af36871497f6d1a17f4150acd43d986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mcclintock, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 13:12:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:12:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:12:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:12:55 np0005596060 systemd[1]: var-lib-containers-storage-overlay-53035c5d046de6d5308400fa701a9adbf06c2b7e25afe1e380cb80fb81aa91e5-merged.mount: Deactivated successfully.
Jan 26 13:12:55 np0005596060 podman[264870]: 2026-01-26 18:12:55.825156705 +0000 UTC m=+0.954805306 container remove 97c71cc96ea28bfa25f99c94bd9b12e47af36871497f6d1a17f4150acd43d986 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_mcclintock, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:12:55 np0005596060 systemd[1]: libpod-conmon-97c71cc96ea28bfa25f99c94bd9b12e47af36871497f6d1a17f4150acd43d986.scope: Deactivated successfully.
Jan 26 13:12:55 np0005596060 nova_compute[247421]: 2026-01-26 18:12:55.889 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:12:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:55.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:12:56 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 26 13:12:56 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 26 13:12:56 np0005596060 podman[264912]: 2026-01-26 18:12:56.023726943 +0000 UTC m=+0.052441603 container create a122868a687f417d2919bb992bbf2edd566010c4b4471e99a26a73657f062c05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 13:12:56 np0005596060 systemd[1]: Started libpod-conmon-a122868a687f417d2919bb992bbf2edd566010c4b4471e99a26a73657f062c05.scope.
Jan 26 13:12:56 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:12:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 141 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 98 op/s
Jan 26 13:12:56 np0005596060 podman[264912]: 2026-01-26 18:12:56.002853621 +0000 UTC m=+0.031568301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:12:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d0f78391c78d9e61ad7eebf236f307d1844dcc1fc2a348274323e23b65510f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:12:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d0f78391c78d9e61ad7eebf236f307d1844dcc1fc2a348274323e23b65510f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:12:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d0f78391c78d9e61ad7eebf236f307d1844dcc1fc2a348274323e23b65510f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:12:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d0f78391c78d9e61ad7eebf236f307d1844dcc1fc2a348274323e23b65510f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:12:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d0f78391c78d9e61ad7eebf236f307d1844dcc1fc2a348274323e23b65510f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:12:56 np0005596060 podman[264912]: 2026-01-26 18:12:56.166880384 +0000 UTC m=+0.195595124 container init a122868a687f417d2919bb992bbf2edd566010c4b4471e99a26a73657f062c05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:12:56 np0005596060 podman[264912]: 2026-01-26 18:12:56.176115125 +0000 UTC m=+0.204829785 container start a122868a687f417d2919bb992bbf2edd566010c4b4471e99a26a73657f062c05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:12:56 np0005596060 podman[264912]: 2026-01-26 18:12:56.180366712 +0000 UTC m=+0.209081372 container attach a122868a687f417d2919bb992bbf2edd566010c4b4471e99a26a73657f062c05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 13:12:56 np0005596060 nova_compute[247421]: 2026-01-26 18:12:56.647 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:12:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.003000075s ======
Jan 26 13:12:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:56.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000075s
Jan 26 13:12:56 np0005596060 sharp_easley[264928]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:12:56 np0005596060 sharp_easley[264928]: --> relative data size: 1.0
Jan 26 13:12:57 np0005596060 sharp_easley[264928]: --> All data devices are unavailable
Jan 26 13:12:57 np0005596060 systemd[1]: libpod-a122868a687f417d2919bb992bbf2edd566010c4b4471e99a26a73657f062c05.scope: Deactivated successfully.
Jan 26 13:12:57 np0005596060 podman[264912]: 2026-01-26 18:12:57.025601967 +0000 UTC m=+1.054316627 container died a122868a687f417d2919bb992bbf2edd566010c4b4471e99a26a73657f062c05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 13:12:57 np0005596060 nova_compute[247421]: 2026-01-26 18:12:57.144 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:12:57 np0005596060 nova_compute[247421]: 2026-01-26 18:12:57.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:12:57 np0005596060 systemd[1]: var-lib-containers-storage-overlay-02d0f78391c78d9e61ad7eebf236f307d1844dcc1fc2a348274323e23b65510f-merged.mount: Deactivated successfully.
Jan 26 13:12:57 np0005596060 podman[264912]: 2026-01-26 18:12:57.828669477 +0000 UTC m=+1.857384137 container remove a122868a687f417d2919bb992bbf2edd566010c4b4471e99a26a73657f062c05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_easley, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:12:57 np0005596060 systemd[1]: libpod-conmon-a122868a687f417d2919bb992bbf2edd566010c4b4471e99a26a73657f062c05.scope: Deactivated successfully.
Jan 26 13:12:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:12:58.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 141 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 955 KiB/s rd, 2.2 MiB/s wr, 91 op/s
Jan 26 13:12:58 np0005596060 podman[265098]: 2026-01-26 18:12:58.530341481 +0000 UTC m=+0.035751495 container create a2a4f19dbaa11aa2561da26cb3882661be1741445e238570b9697610c7d846d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:12:58 np0005596060 systemd[1]: Started libpod-conmon-a2a4f19dbaa11aa2561da26cb3882661be1741445e238570b9697610c7d846d7.scope.
Jan 26 13:12:58 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:12:58 np0005596060 podman[265098]: 2026-01-26 18:12:58.513254023 +0000 UTC m=+0.018664057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:12:58 np0005596060 podman[265098]: 2026-01-26 18:12:58.616545868 +0000 UTC m=+0.121955882 container init a2a4f19dbaa11aa2561da26cb3882661be1741445e238570b9697610c7d846d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 13:12:58 np0005596060 podman[265098]: 2026-01-26 18:12:58.625012639 +0000 UTC m=+0.130422653 container start a2a4f19dbaa11aa2561da26cb3882661be1741445e238570b9697610c7d846d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:12:58 np0005596060 podman[265098]: 2026-01-26 18:12:58.62785479 +0000 UTC m=+0.133264814 container attach a2a4f19dbaa11aa2561da26cb3882661be1741445e238570b9697610c7d846d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 13:12:58 np0005596060 dazzling_dhawan[265114]: 167 167
Jan 26 13:12:58 np0005596060 systemd[1]: libpod-a2a4f19dbaa11aa2561da26cb3882661be1741445e238570b9697610c7d846d7.scope: Deactivated successfully.
Jan 26 13:12:58 np0005596060 podman[265098]: 2026-01-26 18:12:58.632680291 +0000 UTC m=+0.138090335 container died a2a4f19dbaa11aa2561da26cb3882661be1741445e238570b9697610c7d846d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dhawan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:12:58 np0005596060 nova_compute[247421]: 2026-01-26 18:12:58.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:12:58 np0005596060 systemd[1]: var-lib-containers-storage-overlay-93aa43d608dff1fcb78461e6dd26e5eb69dad209fc57c52624ab33de14e8500f-merged.mount: Deactivated successfully.
Jan 26 13:12:58 np0005596060 podman[265098]: 2026-01-26 18:12:58.681343889 +0000 UTC m=+0.186753903 container remove a2a4f19dbaa11aa2561da26cb3882661be1741445e238570b9697610c7d846d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_dhawan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 13:12:58 np0005596060 systemd[1]: libpod-conmon-a2a4f19dbaa11aa2561da26cb3882661be1741445e238570b9697610c7d846d7.scope: Deactivated successfully.
Jan 26 13:12:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:12:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:12:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:12:58.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:12:58 np0005596060 podman[265138]: 2026-01-26 18:12:58.874943512 +0000 UTC m=+0.054983097 container create f6c8714d658fb2e40267d21aaac2f7bd1d3eae3f832fe127c0e7052a88f2b658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 13:12:58 np0005596060 systemd[1]: Started libpod-conmon-f6c8714d658fb2e40267d21aaac2f7bd1d3eae3f832fe127c0e7052a88f2b658.scope.
Jan 26 13:12:58 np0005596060 podman[265138]: 2026-01-26 18:12:58.84568538 +0000 UTC m=+0.025724945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:12:58 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:12:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d27fffe45453dec90a4c482adff45e4f92bf65df69ab8dec3587b45cf48648/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:12:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d27fffe45453dec90a4c482adff45e4f92bf65df69ab8dec3587b45cf48648/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:12:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d27fffe45453dec90a4c482adff45e4f92bf65df69ab8dec3587b45cf48648/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:12:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d27fffe45453dec90a4c482adff45e4f92bf65df69ab8dec3587b45cf48648/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:12:58 np0005596060 podman[265138]: 2026-01-26 18:12:58.977384245 +0000 UTC m=+0.157423810 container init f6c8714d658fb2e40267d21aaac2f7bd1d3eae3f832fe127c0e7052a88f2b658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 13:12:58 np0005596060 podman[265138]: 2026-01-26 18:12:58.984302107 +0000 UTC m=+0.164341652 container start f6c8714d658fb2e40267d21aaac2f7bd1d3eae3f832fe127c0e7052a88f2b658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 26 13:12:58 np0005596060 podman[265138]: 2026-01-26 18:12:58.989373214 +0000 UTC m=+0.169412759 container attach f6c8714d658fb2e40267d21aaac2f7bd1d3eae3f832fe127c0e7052a88f2b658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 26 13:12:59 np0005596060 nova_compute[247421]: 2026-01-26 18:12:59.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:12:59 np0005596060 nova_compute[247421]: 2026-01-26 18:12:59.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:12:59 np0005596060 nova_compute[247421]: 2026-01-26 18:12:59.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:12:59 np0005596060 nova_compute[247421]: 2026-01-26 18:12:59.742 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:12:59 np0005596060 nova_compute[247421]: 2026-01-26 18:12:59.743 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:12:59 np0005596060 nova_compute[247421]: 2026-01-26 18:12:59.743 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]: {
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:    "1": [
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:        {
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            "devices": [
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "/dev/loop3"
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            ],
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            "lv_name": "ceph_lv0",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            "lv_size": "7511998464",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            "name": "ceph_lv0",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            "tags": {
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "ceph.cluster_name": "ceph",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "ceph.crush_device_class": "",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "ceph.encrypted": "0",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "ceph.osd_id": "1",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "ceph.type": "block",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:                "ceph.vdo": "0"
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            },
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            "type": "block",
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:            "vg_name": "ceph_vg0"
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:        }
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]:    ]
Jan 26 13:12:59 np0005596060 ecstatic_bohr[265154]: }
Jan 26 13:12:59 np0005596060 podman[265138]: 2026-01-26 18:12:59.768592297 +0000 UTC m=+0.948631852 container died f6c8714d658fb2e40267d21aaac2f7bd1d3eae3f832fe127c0e7052a88f2b658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 26 13:12:59 np0005596060 systemd[1]: libpod-f6c8714d658fb2e40267d21aaac2f7bd1d3eae3f832fe127c0e7052a88f2b658.scope: Deactivated successfully.
Jan 26 13:12:59 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f6d27fffe45453dec90a4c482adff45e4f92bf65df69ab8dec3587b45cf48648-merged.mount: Deactivated successfully.
Jan 26 13:12:59 np0005596060 podman[265138]: 2026-01-26 18:12:59.837790358 +0000 UTC m=+1.017829913 container remove f6c8714d658fb2e40267d21aaac2f7bd1d3eae3f832fe127c0e7052a88f2b658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 13:12:59 np0005596060 systemd[1]: libpod-conmon-f6c8714d658fb2e40267d21aaac2f7bd1d3eae3f832fe127c0e7052a88f2b658.scope: Deactivated successfully.
Jan 26 13:13:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:00.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 141 MiB data, 331 MiB used, 21 GiB / 21 GiB avail; 810 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 26 13:13:00 np0005596060 podman[265315]: 2026-01-26 18:13:00.522803385 +0000 UTC m=+0.047320544 container create 31470e8ef12c79018e44c693f4854e121aa32e2e5fc629ff55847f665ee9d094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 13:13:00 np0005596060 systemd[1]: Started libpod-conmon-31470e8ef12c79018e44c693f4854e121aa32e2e5fc629ff55847f665ee9d094.scope.
Jan 26 13:13:00 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:13:00 np0005596060 podman[265315]: 2026-01-26 18:13:00.50301987 +0000 UTC m=+0.027537049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:13:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:13:00 np0005596060 nova_compute[247421]: 2026-01-26 18:13:00.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:13:00 np0005596060 nova_compute[247421]: 2026-01-26 18:13:00.672 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:13:00 np0005596060 nova_compute[247421]: 2026-01-26 18:13:00.673 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:13:00 np0005596060 nova_compute[247421]: 2026-01-26 18:13:00.673 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:13:00 np0005596060 nova_compute[247421]: 2026-01-26 18:13:00.673 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:13:00 np0005596060 nova_compute[247421]: 2026-01-26 18:13:00.674 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:13:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:00.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:00 np0005596060 podman[265315]: 2026-01-26 18:13:00.790140453 +0000 UTC m=+0.314657632 container init 31470e8ef12c79018e44c693f4854e121aa32e2e5fc629ff55847f665ee9d094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 26 13:13:00 np0005596060 podman[265315]: 2026-01-26 18:13:00.80437796 +0000 UTC m=+0.328895119 container start 31470e8ef12c79018e44c693f4854e121aa32e2e5fc629ff55847f665ee9d094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:13:00 np0005596060 trusting_brattain[265331]: 167 167
Jan 26 13:13:00 np0005596060 systemd[1]: libpod-31470e8ef12c79018e44c693f4854e121aa32e2e5fc629ff55847f665ee9d094.scope: Deactivated successfully.
Jan 26 13:13:00 np0005596060 conmon[265331]: conmon 31470e8ef12c79018e44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31470e8ef12c79018e44c693f4854e121aa32e2e5fc629ff55847f665ee9d094.scope/container/memory.events
Jan 26 13:13:00 np0005596060 podman[265315]: 2026-01-26 18:13:00.827780265 +0000 UTC m=+0.352297434 container attach 31470e8ef12c79018e44c693f4854e121aa32e2e5fc629ff55847f665ee9d094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:13:00 np0005596060 podman[265315]: 2026-01-26 18:13:00.828706958 +0000 UTC m=+0.353224127 container died 31470e8ef12c79018e44c693f4854e121aa32e2e5fc629ff55847f665ee9d094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:13:00 np0005596060 nova_compute[247421]: 2026-01-26 18:13:00.892 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:13:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/866658922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:13:01 np0005596060 nova_compute[247421]: 2026-01-26 18:13:01.107 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:13:01 np0005596060 nova_compute[247421]: 2026-01-26 18:13:01.354 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:13:01 np0005596060 nova_compute[247421]: 2026-01-26 18:13:01.355 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4773MB free_disk=20.94280242919922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:13:01 np0005596060 nova_compute[247421]: 2026-01-26 18:13:01.356 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:13:01 np0005596060 nova_compute[247421]: 2026-01-26 18:13:01.356 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:13:01 np0005596060 nova_compute[247421]: 2026-01-26 18:13:01.436 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:13:01 np0005596060 nova_compute[247421]: 2026-01-26 18:13:01.436 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:13:01 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c0f5a845c2c757a9a963af8bb20865a60a2c6a0420c2d17d2fd003018535fdf9-merged.mount: Deactivated successfully.
Jan 26 13:13:01 np0005596060 nova_compute[247421]: 2026-01-26 18:13:01.581 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:13:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:02.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:13:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/56679581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:13:02 np0005596060 nova_compute[247421]: 2026-01-26 18:13:02.091 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:13:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 141 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 834 KiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 26 13:13:02 np0005596060 nova_compute[247421]: 2026-01-26 18:13:02.099 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:13:02 np0005596060 nova_compute[247421]: 2026-01-26 18:13:02.120 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:13:02 np0005596060 nova_compute[247421]: 2026-01-26 18:13:02.124 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:13:02 np0005596060 nova_compute[247421]: 2026-01-26 18:13:02.125 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:13:02 np0005596060 nova_compute[247421]: 2026-01-26 18:13:02.148 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:02 np0005596060 podman[265315]: 2026-01-26 18:13:02.606528623 +0000 UTC m=+2.131045782 container remove 31470e8ef12c79018e44c693f4854e121aa32e2e5fc629ff55847f665ee9d094 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_brattain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:13:02 np0005596060 systemd[1]: libpod-conmon-31470e8ef12c79018e44c693f4854e121aa32e2e5fc629ff55847f665ee9d094.scope: Deactivated successfully.
Jan 26 13:13:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:02.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:02 np0005596060 podman[265400]: 2026-01-26 18:13:02.785959672 +0000 UTC m=+0.054227078 container create fef1e4dc5dfd161d60623320b56a36000d9e9758c607ad0bc408782935b06df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 13:13:02 np0005596060 systemd[1]: Started libpod-conmon-fef1e4dc5dfd161d60623320b56a36000d9e9758c607ad0bc408782935b06df5.scope.
Jan 26 13:13:02 np0005596060 podman[265400]: 2026-01-26 18:13:02.757442579 +0000 UTC m=+0.025710025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:13:02 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:13:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713920ebae6a48c4e25e1f84b8d7f244c8e3bd934b7f32f7c3301f1baac0959e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:13:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713920ebae6a48c4e25e1f84b8d7f244c8e3bd934b7f32f7c3301f1baac0959e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:13:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713920ebae6a48c4e25e1f84b8d7f244c8e3bd934b7f32f7c3301f1baac0959e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:13:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/713920ebae6a48c4e25e1f84b8d7f244c8e3bd934b7f32f7c3301f1baac0959e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:13:02 np0005596060 podman[265400]: 2026-01-26 18:13:02.907354449 +0000 UTC m=+0.175621835 container init fef1e4dc5dfd161d60623320b56a36000d9e9758c607ad0bc408782935b06df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:13:02 np0005596060 podman[265400]: 2026-01-26 18:13:02.919964704 +0000 UTC m=+0.188232070 container start fef1e4dc5dfd161d60623320b56a36000d9e9758c607ad0bc408782935b06df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 13:13:02 np0005596060 podman[265400]: 2026-01-26 18:13:02.924228461 +0000 UTC m=+0.192495827 container attach fef1e4dc5dfd161d60623320b56a36000d9e9758c607ad0bc408782935b06df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:13:03 np0005596060 nova_compute[247421]: 2026-01-26 18:13:03.126 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002166867322724735 of space, bias 1.0, pg target 0.6500601968174204 quantized to 32 (current 32)
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:13:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:13:03 np0005596060 pedantic_albattani[265416]: {
Jan 26 13:13:03 np0005596060 pedantic_albattani[265416]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:13:03 np0005596060 pedantic_albattani[265416]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:13:03 np0005596060 pedantic_albattani[265416]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:13:03 np0005596060 pedantic_albattani[265416]:        "osd_id": 1,
Jan 26 13:13:03 np0005596060 pedantic_albattani[265416]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:13:03 np0005596060 pedantic_albattani[265416]:        "type": "bluestore"
Jan 26 13:13:03 np0005596060 pedantic_albattani[265416]:    }
Jan 26 13:13:03 np0005596060 pedantic_albattani[265416]: }
Jan 26 13:13:03 np0005596060 systemd[1]: libpod-fef1e4dc5dfd161d60623320b56a36000d9e9758c607ad0bc408782935b06df5.scope: Deactivated successfully.
Jan 26 13:13:03 np0005596060 podman[265400]: 2026-01-26 18:13:03.853584231 +0000 UTC m=+1.121851597 container died fef1e4dc5dfd161d60623320b56a36000d9e9758c607ad0bc408782935b06df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:13:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:04.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-713920ebae6a48c4e25e1f84b8d7f244c8e3bd934b7f32f7c3301f1baac0959e-merged.mount: Deactivated successfully.
Jan 26 13:13:04 np0005596060 podman[265400]: 2026-01-26 18:13:04.064485197 +0000 UTC m=+1.332752573 container remove fef1e4dc5dfd161d60623320b56a36000d9e9758c607ad0bc408782935b06df5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 13:13:04 np0005596060 systemd[1]: libpod-conmon-fef1e4dc5dfd161d60623320b56a36000d9e9758c607ad0bc408782935b06df5.scope: Deactivated successfully.
Jan 26 13:13:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 141 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 111 KiB/s rd, 42 KiB/s wr, 159 op/s
Jan 26 13:13:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:13:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:13:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:13:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:13:04 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev beda12e2-a824-4b6c-a040-ea024c6edf10 does not exist
Jan 26 13:13:04 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 739fa6ed-8897-41d5-af96-a4de78e419c3 does not exist
Jan 26 13:13:04 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 98b95c9a-5d7d-42f6-937f-5de65e4ce961 does not exist
Jan 26 13:13:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:04.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:13:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:13:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:13:05 np0005596060 nova_compute[247421]: 2026-01-26 18:13:05.895 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:06.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 141 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 90 KiB/s rd, 11 KiB/s wr, 150 op/s
Jan 26 13:13:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 26 13:13:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 26 13:13:06 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 26 13:13:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:06.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:07 np0005596060 nova_compute[247421]: 2026-01-26 18:13:07.152 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:08.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 46 MiB data, 299 MiB used, 21 GiB / 21 GiB avail; 110 KiB/s rd, 2.3 KiB/s wr, 177 op/s
Jan 26 13:13:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:08.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:10.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 46 MiB data, 299 MiB used, 21 GiB / 21 GiB avail; 110 KiB/s rd, 2.3 KiB/s wr, 177 op/s
Jan 26 13:13:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:13:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:10.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:10 np0005596060 nova_compute[247421]: 2026-01-26 18:13:10.896 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:12.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 41 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 76 KiB/s rd, 2.8 KiB/s wr, 116 op/s
Jan 26 13:13:12 np0005596060 nova_compute[247421]: 2026-01-26 18:13:12.156 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:12.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:12 np0005596060 podman[265554]: 2026-01-26 18:13:12.841300794 +0000 UTC m=+0.094261129 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 26 13:13:12 np0005596060 podman[265555]: 2026-01-26 18:13:12.854940155 +0000 UTC m=+0.097711275 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 13:13:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:14.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:13:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:13:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:13:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:13:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:13:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:13:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 41 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 2.8 KiB/s wr, 65 op/s
Jan 26 13:13:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:13:14.743 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:13:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:13:14.744 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:13:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:13:14.744 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:13:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:14.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:13:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 26 13:13:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 26 13:13:15 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 26 13:13:15 np0005596060 nova_compute[247421]: 2026-01-26 18:13:15.897 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:16.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 41 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Jan 26 13:13:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:16.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:17 np0005596060 nova_compute[247421]: 2026-01-26 18:13:17.159 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:18.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 41 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 511 B/s wr, 14 op/s
Jan 26 13:13:18 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:13:18.394 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:13:18 np0005596060 nova_compute[247421]: 2026-01-26 18:13:18.395 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:18 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:13:18.396 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:13:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:18.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:20.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 41 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 511 B/s wr, 14 op/s
Jan 26 13:13:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:13:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:20.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:20 np0005596060 nova_compute[247421]: 2026-01-26 18:13:20.900 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:22.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 41 MiB data, 271 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:13:22 np0005596060 nova_compute[247421]: 2026-01-26 18:13:22.162 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:22.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:13:23.397 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:13:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:24.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:13:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:24.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:13:25 np0005596060 nova_compute[247421]: 2026-01-26 18:13:25.903 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:26.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:13:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:26.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:27 np0005596060 nova_compute[247421]: 2026-01-26 18:13:27.164 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:28.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:13:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:28.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:30.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:13:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:13:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:30.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:30 np0005596060 nova_compute[247421]: 2026-01-26 18:13:30.944 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:32.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:13:32 np0005596060 nova_compute[247421]: 2026-01-26 18:13:32.179 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:32.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:34.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:13:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:34.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:13:35 np0005596060 nova_compute[247421]: 2026-01-26 18:13:35.946 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:36.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:13:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:36.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:37 np0005596060 nova_compute[247421]: 2026-01-26 18:13:37.181 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:38.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:13:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:38.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:40.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:13:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:13:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:40.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:40 np0005596060 nova_compute[247421]: 2026-01-26 18:13:40.948 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:42.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
Jan 26 13:13:42 np0005596060 nova_compute[247421]: 2026-01-26 18:13:42.184 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:42.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:43 np0005596060 podman[265663]: 2026-01-26 18:13:43.79773161 +0000 UTC m=+0.061339995 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 13:13:43 np0005596060 podman[265664]: 2026-01-26 18:13:43.857593138 +0000 UTC m=+0.116509446 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 13:13:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:44.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:13:44
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.data', 'images']
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:13:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:13:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:44.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:13:45 np0005596060 nova_compute[247421]: 2026-01-26 18:13:45.951 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:46.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
Jan 26 13:13:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:46.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:47 np0005596060 nova_compute[247421]: 2026-01-26 18:13:47.187 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:48.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 26 13:13:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:48.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:50.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 7 op/s
Jan 26 13:13:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:13:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:50.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:50 np0005596060 nova_compute[247421]: 2026-01-26 18:13:50.956 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:52.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 66 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.1 MiB/s wr, 16 op/s
Jan 26 13:13:52 np0005596060 nova_compute[247421]: 2026-01-26 18:13:52.190 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:13:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:52.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:13:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:54.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 26 13:13:54 np0005596060 nova_compute[247421]: 2026-01-26 18:13:54.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:13:54 np0005596060 nova_compute[247421]: 2026-01-26 18:13:54.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:13:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:54.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:13:55 np0005596060 nova_compute[247421]: 2026-01-26 18:13:55.958 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:56.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 26 13:13:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:56.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:57 np0005596060 nova_compute[247421]: 2026-01-26 18:13:57.192 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:13:58.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 26 13:13:58 np0005596060 nova_compute[247421]: 2026-01-26 18:13:58.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:13:58 np0005596060 nova_compute[247421]: 2026-01-26 18:13:58.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:13:58 np0005596060 nova_compute[247421]: 2026-01-26 18:13:58.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:13:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:13:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:13:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:13:58.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:13:59 np0005596060 nova_compute[247421]: 2026-01-26 18:13:59.582 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:13:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:13:59.582 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:13:59 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:13:59.584 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:13:59 np0005596060 nova_compute[247421]: 2026-01-26 18:13:59.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:14:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:14:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:00.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:14:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 26 13:14:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:14:00 np0005596060 nova_compute[247421]: 2026-01-26 18:14:00.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:14:00 np0005596060 nova_compute[247421]: 2026-01-26 18:14:00.712 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:14:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:00.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:00 np0005596060 nova_compute[247421]: 2026-01-26 18:14:00.959 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:14:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537350821' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:14:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:14:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537350821' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:14:01 np0005596060 nova_compute[247421]: 2026-01-26 18:14:01.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:14:01 np0005596060 nova_compute[247421]: 2026-01-26 18:14:01.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:14:01 np0005596060 nova_compute[247421]: 2026-01-26 18:14:01.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:14:01 np0005596060 nova_compute[247421]: 2026-01-26 18:14:01.878 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:14:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:02.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 63 MiB data, 280 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 26 13:14:02 np0005596060 nova_compute[247421]: 2026-01-26 18:14:02.195 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:02 np0005596060 nova_compute[247421]: 2026-01-26 18:14:02.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:14:02 np0005596060 nova_compute[247421]: 2026-01-26 18:14:02.822 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:14:02 np0005596060 nova_compute[247421]: 2026-01-26 18:14:02.823 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:14:02 np0005596060 nova_compute[247421]: 2026-01-26 18:14:02.823 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:14:02 np0005596060 nova_compute[247421]: 2026-01-26 18:14:02.823 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:14:02 np0005596060 nova_compute[247421]: 2026-01-26 18:14:02.824 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:14:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:02.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:14:03 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2306538135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:14:03 np0005596060 nova_compute[247421]: 2026-01-26 18:14:03.311 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:14:03 np0005596060 nova_compute[247421]: 2026-01-26 18:14:03.524 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:14:03 np0005596060 nova_compute[247421]: 2026-01-26 18:14:03.526 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4835MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:14:03 np0005596060 nova_compute[247421]: 2026-01-26 18:14:03.526 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:14:03 np0005596060 nova_compute[247421]: 2026-01-26 18:14:03.527 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:14:03 np0005596060 nova_compute[247421]: 2026-01-26 18:14:03.630 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:14:03 np0005596060 nova_compute[247421]: 2026-01-26 18:14:03.631 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:14:03 np0005596060 nova_compute[247421]: 2026-01-26 18:14:03.660 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00037568484785036293 of space, bias 1.0, pg target 0.11270545435510888 quantized to 32 (current 32)
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:14:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:14:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:14:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619891954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:14:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:04.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:04 np0005596060 nova_compute[247421]: 2026-01-26 18:14:04.093 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:14:04 np0005596060 nova_compute[247421]: 2026-01-26 18:14:04.099 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:14:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 26 KiB/s rd, 691 KiB/s wr, 40 op/s
Jan 26 13:14:04 np0005596060 nova_compute[247421]: 2026-01-26 18:14:04.130 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:14:04 np0005596060 nova_compute[247421]: 2026-01-26 18:14:04.132 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:14:04 np0005596060 nova_compute[247421]: 2026-01-26 18:14:04.132 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:14:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:04.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:05 np0005596060 nova_compute[247421]: 2026-01-26 18:14:05.133 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:14:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:14:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:14:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:14:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 26 13:14:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 13:14:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:14:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:14:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 26 13:14:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 13:14:06 np0005596060 nova_compute[247421]: 2026-01-26 18:14:06.009 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:06.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 41 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:14:06 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7a490db5-9936-4151-8b60-d1e4820b509b does not exist
Jan 26 13:14:06 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 090a7afc-2f55-4f44-8495-389bec27dfff does not exist
Jan 26 13:14:06 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 94c41cd5-1302-4ea0-9afc-e5c31e29e0d7 does not exist
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 13:14:06 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:14:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:14:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:06.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:14:07 np0005596060 podman[266085]: 2026-01-26 18:14:07.245343484 +0000 UTC m=+0.037458848 container create 66d9694cfe78a97394792c23db9a31681a448262f5aa82da29f6257581fbc009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hoover, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:14:07 np0005596060 nova_compute[247421]: 2026-01-26 18:14:07.250 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:07 np0005596060 systemd[1]: Started libpod-conmon-66d9694cfe78a97394792c23db9a31681a448262f5aa82da29f6257581fbc009.scope.
Jan 26 13:14:07 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:14:07 np0005596060 podman[266085]: 2026-01-26 18:14:07.229156809 +0000 UTC m=+0.021272193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:14:07 np0005596060 podman[266085]: 2026-01-26 18:14:07.350859443 +0000 UTC m=+0.142974827 container init 66d9694cfe78a97394792c23db9a31681a448262f5aa82da29f6257581fbc009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hoover, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 13:14:07 np0005596060 podman[266085]: 2026-01-26 18:14:07.360200317 +0000 UTC m=+0.152315721 container start 66d9694cfe78a97394792c23db9a31681a448262f5aa82da29f6257581fbc009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:14:07 np0005596060 podman[266085]: 2026-01-26 18:14:07.364646768 +0000 UTC m=+0.156762132 container attach 66d9694cfe78a97394792c23db9a31681a448262f5aa82da29f6257581fbc009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hoover, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:14:07 np0005596060 infallible_hoover[266101]: 167 167
Jan 26 13:14:07 np0005596060 systemd[1]: libpod-66d9694cfe78a97394792c23db9a31681a448262f5aa82da29f6257581fbc009.scope: Deactivated successfully.
Jan 26 13:14:07 np0005596060 conmon[266101]: conmon 66d9694cfe78a9739479 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66d9694cfe78a97394792c23db9a31681a448262f5aa82da29f6257581fbc009.scope/container/memory.events
Jan 26 13:14:07 np0005596060 podman[266085]: 2026-01-26 18:14:07.368360551 +0000 UTC m=+0.160475935 container died 66d9694cfe78a97394792c23db9a31681a448262f5aa82da29f6257581fbc009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 13:14:07 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d51640b5b39cf2c3eb6d9a99d72fac0ed3d3c99e28d41a9996cd85c7ede9f423-merged.mount: Deactivated successfully.
Jan 26 13:14:07 np0005596060 podman[266085]: 2026-01-26 18:14:07.408475135 +0000 UTC m=+0.200590509 container remove 66d9694cfe78a97394792c23db9a31681a448262f5aa82da29f6257581fbc009 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_hoover, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:14:07 np0005596060 systemd[1]: libpod-conmon-66d9694cfe78a97394792c23db9a31681a448262f5aa82da29f6257581fbc009.scope: Deactivated successfully.
Jan 26 13:14:07 np0005596060 podman[266124]: 2026-01-26 18:14:07.564614931 +0000 UTC m=+0.039332895 container create 5123784938c5ada7e383db0af1ec960d2a32674ee86919aaa890fe4cca778b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 26 13:14:07 np0005596060 systemd[1]: Started libpod-conmon-5123784938c5ada7e383db0af1ec960d2a32674ee86919aaa890fe4cca778b00.scope.
Jan 26 13:14:07 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:14:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd69861c9a5dea2f2f87842dcb2648915298556461710cdcfa14541db73b45e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd69861c9a5dea2f2f87842dcb2648915298556461710cdcfa14541db73b45e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd69861c9a5dea2f2f87842dcb2648915298556461710cdcfa14541db73b45e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd69861c9a5dea2f2f87842dcb2648915298556461710cdcfa14541db73b45e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fd69861c9a5dea2f2f87842dcb2648915298556461710cdcfa14541db73b45e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:07 np0005596060 podman[266124]: 2026-01-26 18:14:07.547644596 +0000 UTC m=+0.022362580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:14:07 np0005596060 podman[266124]: 2026-01-26 18:14:07.650309395 +0000 UTC m=+0.125027359 container init 5123784938c5ada7e383db0af1ec960d2a32674ee86919aaa890fe4cca778b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_johnson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:14:07 np0005596060 podman[266124]: 2026-01-26 18:14:07.660735545 +0000 UTC m=+0.135453509 container start 5123784938c5ada7e383db0af1ec960d2a32674ee86919aaa890fe4cca778b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_johnson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 26 13:14:07 np0005596060 podman[266124]: 2026-01-26 18:14:07.664313575 +0000 UTC m=+0.139031539 container attach 5123784938c5ada7e383db0af1ec960d2a32674ee86919aaa890fe4cca778b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_johnson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:14:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:14:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:14:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:08.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 26 13:14:08 np0005596060 recursing_johnson[266140]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:14:08 np0005596060 recursing_johnson[266140]: --> relative data size: 1.0
Jan 26 13:14:08 np0005596060 recursing_johnson[266140]: --> All data devices are unavailable
Jan 26 13:14:08 np0005596060 systemd[1]: libpod-5123784938c5ada7e383db0af1ec960d2a32674ee86919aaa890fe4cca778b00.scope: Deactivated successfully.
Jan 26 13:14:08 np0005596060 conmon[266140]: conmon 5123784938c5ada7e383 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5123784938c5ada7e383db0af1ec960d2a32674ee86919aaa890fe4cca778b00.scope/container/memory.events
Jan 26 13:14:08 np0005596060 podman[266124]: 2026-01-26 18:14:08.574554626 +0000 UTC m=+1.049272590 container died 5123784938c5ada7e383db0af1ec960d2a32674ee86919aaa890fe4cca778b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_johnson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 26 13:14:08 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:14:08.585 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:14:08 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4fd69861c9a5dea2f2f87842dcb2648915298556461710cdcfa14541db73b45e-merged.mount: Deactivated successfully.
Jan 26 13:14:08 np0005596060 podman[266124]: 2026-01-26 18:14:08.63946723 +0000 UTC m=+1.114185194 container remove 5123784938c5ada7e383db0af1ec960d2a32674ee86919aaa890fe4cca778b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_johnson, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:14:08 np0005596060 systemd[1]: libpod-conmon-5123784938c5ada7e383db0af1ec960d2a32674ee86919aaa890fe4cca778b00.scope: Deactivated successfully.
Jan 26 13:14:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:08.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:09 np0005596060 podman[266312]: 2026-01-26 18:14:09.280306312 +0000 UTC m=+0.037537760 container create 85413bbb1b8af504651ef1fec4a446de582c7a69d6dc45d893a69065991f1d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lewin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:14:09 np0005596060 systemd[1]: Started libpod-conmon-85413bbb1b8af504651ef1fec4a446de582c7a69d6dc45d893a69065991f1d1b.scope.
Jan 26 13:14:09 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:14:09 np0005596060 podman[266312]: 2026-01-26 18:14:09.35059278 +0000 UTC m=+0.107824258 container init 85413bbb1b8af504651ef1fec4a446de582c7a69d6dc45d893a69065991f1d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lewin, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:14:09 np0005596060 podman[266312]: 2026-01-26 18:14:09.358252562 +0000 UTC m=+0.115484010 container start 85413bbb1b8af504651ef1fec4a446de582c7a69d6dc45d893a69065991f1d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:14:09 np0005596060 podman[266312]: 2026-01-26 18:14:09.263551003 +0000 UTC m=+0.020782471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:14:09 np0005596060 podman[266312]: 2026-01-26 18:14:09.36297692 +0000 UTC m=+0.120208388 container attach 85413bbb1b8af504651ef1fec4a446de582c7a69d6dc45d893a69065991f1d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lewin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:14:09 np0005596060 fervent_lewin[266329]: 167 167
Jan 26 13:14:09 np0005596060 systemd[1]: libpod-85413bbb1b8af504651ef1fec4a446de582c7a69d6dc45d893a69065991f1d1b.scope: Deactivated successfully.
Jan 26 13:14:09 np0005596060 podman[266312]: 2026-01-26 18:14:09.364741424 +0000 UTC m=+0.121972872 container died 85413bbb1b8af504651ef1fec4a446de582c7a69d6dc45d893a69065991f1d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:14:09 np0005596060 systemd[1]: var-lib-containers-storage-overlay-685464f3f8e14fd5105fa1095f67372b97114d2dcbe1c7ba1d64c9ede2a02fe8-merged.mount: Deactivated successfully.
Jan 26 13:14:09 np0005596060 podman[266312]: 2026-01-26 18:14:09.397849233 +0000 UTC m=+0.155080671 container remove 85413bbb1b8af504651ef1fec4a446de582c7a69d6dc45d893a69065991f1d1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lewin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:14:09 np0005596060 systemd[1]: libpod-conmon-85413bbb1b8af504651ef1fec4a446de582c7a69d6dc45d893a69065991f1d1b.scope: Deactivated successfully.
Jan 26 13:14:09 np0005596060 podman[266352]: 2026-01-26 18:14:09.556257866 +0000 UTC m=+0.043043288 container create de98c799d6915ec4a029dee59ef350c16fffdc732c36cfcce30bcce706469aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 13:14:09 np0005596060 systemd[1]: Started libpod-conmon-de98c799d6915ec4a029dee59ef350c16fffdc732c36cfcce30bcce706469aee.scope.
Jan 26 13:14:09 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:14:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21baa721f06aa227caa0bcf316727ac74ce009956475f0b8320fe58ed04da343/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21baa721f06aa227caa0bcf316727ac74ce009956475f0b8320fe58ed04da343/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21baa721f06aa227caa0bcf316727ac74ce009956475f0b8320fe58ed04da343/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21baa721f06aa227caa0bcf316727ac74ce009956475f0b8320fe58ed04da343/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:09 np0005596060 podman[266352]: 2026-01-26 18:14:09.535458095 +0000 UTC m=+0.022243537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:14:09 np0005596060 podman[266352]: 2026-01-26 18:14:09.633621141 +0000 UTC m=+0.120406603 container init de98c799d6915ec4a029dee59ef350c16fffdc732c36cfcce30bcce706469aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bouman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 13:14:09 np0005596060 podman[266352]: 2026-01-26 18:14:09.646590345 +0000 UTC m=+0.133375767 container start de98c799d6915ec4a029dee59ef350c16fffdc732c36cfcce30bcce706469aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:14:09 np0005596060 podman[266352]: 2026-01-26 18:14:09.650204996 +0000 UTC m=+0.136990418 container attach de98c799d6915ec4a029dee59ef350c16fffdc732c36cfcce30bcce706469aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:14:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:10.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]: {
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:    "1": [
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:        {
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            "devices": [
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "/dev/loop3"
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            ],
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            "lv_name": "ceph_lv0",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            "lv_size": "7511998464",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            "name": "ceph_lv0",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            "tags": {
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "ceph.cluster_name": "ceph",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "ceph.crush_device_class": "",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "ceph.encrypted": "0",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "ceph.osd_id": "1",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "ceph.type": "block",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:                "ceph.vdo": "0"
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            },
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            "type": "block",
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:            "vg_name": "ceph_vg0"
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:        }
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]:    ]
Jan 26 13:14:10 np0005596060 xenodochial_bouman[266368]: }
Jan 26 13:14:10 np0005596060 systemd[1]: libpod-de98c799d6915ec4a029dee59ef350c16fffdc732c36cfcce30bcce706469aee.scope: Deactivated successfully.
Jan 26 13:14:10 np0005596060 podman[266352]: 2026-01-26 18:14:10.509709208 +0000 UTC m=+0.996494640 container died de98c799d6915ec4a029dee59ef350c16fffdc732c36cfcce30bcce706469aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bouman, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:14:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:14:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:10.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:11 np0005596060 nova_compute[247421]: 2026-01-26 18:14:11.053 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:11 np0005596060 systemd[1]: var-lib-containers-storage-overlay-21baa721f06aa227caa0bcf316727ac74ce009956475f0b8320fe58ed04da343-merged.mount: Deactivated successfully.
Jan 26 13:14:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:12.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 26 13:14:12 np0005596060 nova_compute[247421]: 2026-01-26 18:14:12.303 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:12 np0005596060 podman[266352]: 2026-01-26 18:14:12.318388495 +0000 UTC m=+2.805173917 container remove de98c799d6915ec4a029dee59ef350c16fffdc732c36cfcce30bcce706469aee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 13:14:12 np0005596060 systemd[1]: libpod-conmon-de98c799d6915ec4a029dee59ef350c16fffdc732c36cfcce30bcce706469aee.scope: Deactivated successfully.
Jan 26 13:14:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:12.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:12 np0005596060 podman[266579]: 2026-01-26 18:14:12.96854419 +0000 UTC m=+0.049753865 container create c93433c58c56bdbc2d95c3b7c4a5960618d962a753483ea09b6701816acd1f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_yalow, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 26 13:14:13 np0005596060 systemd[1]: Started libpod-conmon-c93433c58c56bdbc2d95c3b7c4a5960618d962a753483ea09b6701816acd1f7a.scope.
Jan 26 13:14:13 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:14:13 np0005596060 podman[266579]: 2026-01-26 18:14:12.944924659 +0000 UTC m=+0.026134324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:14:13 np0005596060 podman[266579]: 2026-01-26 18:14:13.054580303 +0000 UTC m=+0.135789958 container init c93433c58c56bdbc2d95c3b7c4a5960618d962a753483ea09b6701816acd1f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 13:14:13 np0005596060 podman[266579]: 2026-01-26 18:14:13.065792693 +0000 UTC m=+0.147002328 container start c93433c58c56bdbc2d95c3b7c4a5960618d962a753483ea09b6701816acd1f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_yalow, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:14:13 np0005596060 podman[266579]: 2026-01-26 18:14:13.069408604 +0000 UTC m=+0.150618239 container attach c93433c58c56bdbc2d95c3b7c4a5960618d962a753483ea09b6701816acd1f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_yalow, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 26 13:14:13 np0005596060 admiring_yalow[266595]: 167 167
Jan 26 13:14:13 np0005596060 systemd[1]: libpod-c93433c58c56bdbc2d95c3b7c4a5960618d962a753483ea09b6701816acd1f7a.scope: Deactivated successfully.
Jan 26 13:14:13 np0005596060 podman[266579]: 2026-01-26 18:14:13.073012944 +0000 UTC m=+0.154222619 container died c93433c58c56bdbc2d95c3b7c4a5960618d962a753483ea09b6701816acd1f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_yalow, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 13:14:13 np0005596060 systemd[1]: var-lib-containers-storage-overlay-37464cb468df8378e434a3a4e02d7335362c14b498a907c21972975c4248da70-merged.mount: Deactivated successfully.
Jan 26 13:14:13 np0005596060 podman[266579]: 2026-01-26 18:14:13.114946523 +0000 UTC m=+0.196156158 container remove c93433c58c56bdbc2d95c3b7c4a5960618d962a753483ea09b6701816acd1f7a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_yalow, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:14:13 np0005596060 systemd[1]: libpod-conmon-c93433c58c56bdbc2d95c3b7c4a5960618d962a753483ea09b6701816acd1f7a.scope: Deactivated successfully.
Jan 26 13:14:13 np0005596060 podman[266617]: 2026-01-26 18:14:13.301631503 +0000 UTC m=+0.052200817 container create 8afbbd2ac92b83535bf9331073c87607f0bf6a572594893fd83798ad54669cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 13:14:13 np0005596060 systemd[1]: Started libpod-conmon-8afbbd2ac92b83535bf9331073c87607f0bf6a572594893fd83798ad54669cb9.scope.
Jan 26 13:14:13 np0005596060 podman[266617]: 2026-01-26 18:14:13.275068299 +0000 UTC m=+0.025637633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:14:13 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:14:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaea70adea2b1ed21edc0db25f69006b58724db53667e8f300b17a490459b29b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaea70adea2b1ed21edc0db25f69006b58724db53667e8f300b17a490459b29b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaea70adea2b1ed21edc0db25f69006b58724db53667e8f300b17a490459b29b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaea70adea2b1ed21edc0db25f69006b58724db53667e8f300b17a490459b29b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:14:13 np0005596060 podman[266617]: 2026-01-26 18:14:13.387581103 +0000 UTC m=+0.138150417 container init 8afbbd2ac92b83535bf9331073c87607f0bf6a572594893fd83798ad54669cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 13:14:13 np0005596060 podman[266617]: 2026-01-26 18:14:13.398520887 +0000 UTC m=+0.149090201 container start 8afbbd2ac92b83535bf9331073c87607f0bf6a572594893fd83798ad54669cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chebyshev, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:14:13 np0005596060 podman[266617]: 2026-01-26 18:14:13.402008994 +0000 UTC m=+0.152578338 container attach 8afbbd2ac92b83535bf9331073c87607f0bf6a572594893fd83798ad54669cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chebyshev, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Jan 26 13:14:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:14:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:14:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:14:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:14:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:14:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:14:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:14.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 3 op/s
Jan 26 13:14:14 np0005596060 condescending_chebyshev[266633]: {
Jan 26 13:14:14 np0005596060 condescending_chebyshev[266633]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:14:14 np0005596060 condescending_chebyshev[266633]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:14:14 np0005596060 condescending_chebyshev[266633]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:14:14 np0005596060 condescending_chebyshev[266633]:        "osd_id": 1,
Jan 26 13:14:14 np0005596060 condescending_chebyshev[266633]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:14:14 np0005596060 condescending_chebyshev[266633]:        "type": "bluestore"
Jan 26 13:14:14 np0005596060 condescending_chebyshev[266633]:    }
Jan 26 13:14:14 np0005596060 condescending_chebyshev[266633]: }
Jan 26 13:14:14 np0005596060 systemd[1]: libpod-8afbbd2ac92b83535bf9331073c87607f0bf6a572594893fd83798ad54669cb9.scope: Deactivated successfully.
Jan 26 13:14:14 np0005596060 podman[266617]: 2026-01-26 18:14:14.274639804 +0000 UTC m=+1.025209118 container died 8afbbd2ac92b83535bf9331073c87607f0bf6a572594893fd83798ad54669cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:14:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:14:14.744 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:14:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:14:14.745 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:14:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:14:14.745 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:14:14 np0005596060 systemd[1]: var-lib-containers-storage-overlay-aaea70adea2b1ed21edc0db25f69006b58724db53667e8f300b17a490459b29b-merged.mount: Deactivated successfully.
Jan 26 13:14:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:14.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:15 np0005596060 podman[266617]: 2026-01-26 18:14:15.330641521 +0000 UTC m=+2.081210835 container remove 8afbbd2ac92b83535bf9331073c87607f0bf6a572594893fd83798ad54669cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 13:14:15 np0005596060 systemd[1]: libpod-conmon-8afbbd2ac92b83535bf9331073c87607f0bf6a572594893fd83798ad54669cb9.scope: Deactivated successfully.
Jan 26 13:14:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:14:15 np0005596060 podman[266655]: 2026-01-26 18:14:15.396485149 +0000 UTC m=+1.090947903 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 13:14:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:14:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:14:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:14:15 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 482e638a-ed5c-4ec1-a875-b9fef22ba55e does not exist
Jan 26 13:14:15 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8337e1cb-c846-4ee2-81c8-26ce044c308c does not exist
Jan 26 13:14:15 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5b63fb83-0791-4c0d-9276-9de3b17066fc does not exist
Jan 26 13:14:15 np0005596060 podman[266658]: 2026-01-26 18:14:15.436399607 +0000 UTC m=+1.129928748 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:14:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:14:16 np0005596060 nova_compute[247421]: 2026-01-26 18:14:16.056 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:16.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:14:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:14:16 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:14:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:16.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:17 np0005596060 nova_compute[247421]: 2026-01-26 18:14:17.304 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:18.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:14:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:18.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:14:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:20.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:14:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:14:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:14:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:20.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:21 np0005596060 nova_compute[247421]: 2026-01-26 18:14:21.058 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:22.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:14:22 np0005596060 nova_compute[247421]: 2026-01-26 18:14:22.307 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:22.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:24.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:24 np0005596060 ceph-osd[84834]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 26 13:14:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:14:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:24.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:14:26 np0005596060 nova_compute[247421]: 2026-01-26 18:14:26.060 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:26.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:14:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:14:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:26.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:14:27 np0005596060 nova_compute[247421]: 2026-01-26 18:14:27.310 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:28.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 26 13:14:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:28.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:30.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 26 13:14:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:14:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:30.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:31 np0005596060 nova_compute[247421]: 2026-01-26 18:14:31.071 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:32.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 26 13:14:32 np0005596060 nova_compute[247421]: 2026-01-26 18:14:32.313 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:14:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:32.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:14:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:34.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 26 13:14:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:14:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:34.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:14:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:14:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:36 np0005596060 nova_compute[247421]: 2026-01-26 18:14:36.105 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:36.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 26 13:14:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:36.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:37 np0005596060 nova_compute[247421]: 2026-01-26 18:14:37.315 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:38.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 26 13:14:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:38.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:40.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 12 KiB/s wr, 58 op/s
Jan 26 13:14:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:14:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:40.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:41 np0005596060 nova_compute[247421]: 2026-01-26 18:14:41.107 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:42.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:14:42 np0005596060 nova_compute[247421]: 2026-01-26 18:14:42.318 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:42.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:14:44
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['backups', 'vms', 'images', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr']
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:14:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:44.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:14:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:14:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:44.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:14:45 np0005596060 podman[266828]: 2026-01-26 18:14:45.805542284 +0000 UTC m=+0.067192072 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 13:14:45 np0005596060 podman[266829]: 2026-01-26 18:14:45.848222902 +0000 UTC m=+0.107482840 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 13:14:46 np0005596060 nova_compute[247421]: 2026-01-26 18:14:46.109 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:46.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:14:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:14:47.121 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:14:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:14:47.121 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:14:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:47.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:47 np0005596060 nova_compute[247421]: 2026-01-26 18:14:47.122 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:47 np0005596060 nova_compute[247421]: 2026-01-26 18:14:47.319 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:48.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:14:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:49.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:50.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 478 KiB/s rd, 15 op/s
Jan 26 13:14:51 np0005596060 nova_compute[247421]: 2026-01-26 18:14:51.111 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:51.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:52.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 100 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 597 KiB/s rd, 1.1 MiB/s wr, 50 op/s
Jan 26 13:14:52 np0005596060 nova_compute[247421]: 2026-01-26 18:14:52.321 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:14:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:14:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:53.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:14:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:54.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 109 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 129 KiB/s rd, 2.0 MiB/s wr, 42 op/s
Jan 26 13:14:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:55.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:55.229644) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451295229686, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2420, "num_deletes": 504, "total_data_size": 4155040, "memory_usage": 4246464, "flush_reason": "Manual Compaction"}
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451295406346, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 2553748, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24381, "largest_seqno": 26799, "table_properties": {"data_size": 2545594, "index_size": 4200, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 23219, "raw_average_key_size": 20, "raw_value_size": 2525926, "raw_average_value_size": 2206, "num_data_blocks": 187, "num_entries": 1145, "num_filter_entries": 1145, "num_deletions": 504, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769451085, "oldest_key_time": 1769451085, "file_creation_time": 1769451295, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 176764 microseconds, and 6700 cpu microseconds.
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:55.406401) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 2553748 bytes OK
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:55.406424) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:55.538138) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:55.538242) EVENT_LOG_v1 {"time_micros": 1769451295538229, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:55.538267) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4144160, prev total WAL file size 4144160, number of live WAL files 2.
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:55.539594) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(2493KB)], [56(9340KB)]
Jan 26 13:14:55 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451295539632, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12118919, "oldest_snapshot_seqno": -1}
Jan 26 13:14:55 np0005596060 nova_compute[247421]: 2026-01-26 18:14:55.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:14:55 np0005596060 nova_compute[247421]: 2026-01-26 18:14:55.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:14:56 np0005596060 nova_compute[247421]: 2026-01-26 18:14:56.113 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:56.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 109 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 129 KiB/s rd, 2.0 MiB/s wr, 42 op/s
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5201 keys, 7747915 bytes, temperature: kUnknown
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451296228338, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7747915, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7714540, "index_size": 19249, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 131706, "raw_average_key_size": 25, "raw_value_size": 7622063, "raw_average_value_size": 1465, "num_data_blocks": 780, "num_entries": 5201, "num_filter_entries": 5201, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769451295, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:56.228716) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7747915 bytes
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:56.768278) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 17.6 rd, 11.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.8) write-amplify(3.0) OK, records in: 6137, records dropped: 936 output_compression: NoCompression
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:56.768334) EVENT_LOG_v1 {"time_micros": 1769451296768309, "job": 30, "event": "compaction_finished", "compaction_time_micros": 688868, "compaction_time_cpu_micros": 21174, "output_level": 6, "num_output_files": 1, "total_output_size": 7747915, "num_input_records": 6137, "num_output_records": 5201, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451296769742, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451296772716, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:55.539461) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:56.772792) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:56.772797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:56.772799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:56.772801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:14:56 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:14:56.772802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:14:57 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:14:57.124 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:14:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:57.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:57 np0005596060 nova_compute[247421]: 2026-01-26 18:14:57.324 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:14:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:14:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:14:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:14:58.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:14:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 121 MiB data, 314 MiB used, 21 GiB / 21 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 26 13:14:58 np0005596060 nova_compute[247421]: 2026-01-26 18:14:58.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:14:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:14:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:14:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:14:59.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:14:59 np0005596060 nova_compute[247421]: 2026-01-26 18:14:59.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:14:59 np0005596060 nova_compute[247421]: 2026-01-26 18:14:59.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:15:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:00.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 121 MiB data, 314 MiB used, 21 GiB / 21 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 26 13:15:00 np0005596060 nova_compute[247421]: 2026-01-26 18:15:00.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:15:00 np0005596060 nova_compute[247421]: 2026-01-26 18:15:00.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:15:01 np0005596060 nova_compute[247421]: 2026-01-26 18:15:01.116 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:01.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:02.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 121 MiB data, 314 MiB used, 21 GiB / 21 GiB avail; 385 KiB/s rd, 2.1 MiB/s wr, 86 op/s
Jan 26 13:15:02 np0005596060 nova_compute[247421]: 2026-01-26 18:15:02.326 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:15:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4147183692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:15:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:15:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4147183692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:15:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:15:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:03.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:03 np0005596060 nova_compute[247421]: 2026-01-26 18:15:03.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:15:03 np0005596060 nova_compute[247421]: 2026-01-26 18:15:03.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:15:03 np0005596060 nova_compute[247421]: 2026-01-26 18:15:03.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002161051205099572 of space, bias 1.0, pg target 0.6483153615298716 quantized to 32 (current 32)
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:15:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:15:03 np0005596060 nova_compute[247421]: 2026-01-26 18:15:03.952 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:15:03 np0005596060 nova_compute[247421]: 2026-01-26 18:15:03.952 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.046 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.047 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.047 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.047 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.048 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:15:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:04.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 121 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 266 KiB/s rd, 1013 KiB/s wr, 50 op/s
Jan 26 13:15:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:15:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2381242933' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.574 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.793 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.794 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4836MB free_disk=20.942806243896484GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.794 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.795 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.911 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.912 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:15:04 np0005596060 nova_compute[247421]: 2026-01-26 18:15:04.957 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:15:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:05.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:15:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1428863592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:15:05 np0005596060 nova_compute[247421]: 2026-01-26 18:15:05.395 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:15:05 np0005596060 nova_compute[247421]: 2026-01-26 18:15:05.399 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:15:05 np0005596060 nova_compute[247421]: 2026-01-26 18:15:05.580 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:15:05 np0005596060 nova_compute[247421]: 2026-01-26 18:15:05.581 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:15:05 np0005596060 nova_compute[247421]: 2026-01-26 18:15:05.581 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:15:06 np0005596060 nova_compute[247421]: 2026-01-26 18:15:06.118 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:06.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 121 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 256 KiB/s rd, 102 KiB/s wr, 43 op/s
Jan 26 13:15:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:07.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:07 np0005596060 nova_compute[247421]: 2026-01-26 18:15:07.280 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:15:07 np0005596060 nova_compute[247421]: 2026-01-26 18:15:07.328 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:15:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:08.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 121 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 256 KiB/s rd, 106 KiB/s wr, 44 op/s
Jan 26 13:15:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:09.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:10.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 121 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 15 KiB/s wr, 0 op/s
Jan 26 13:15:11 np0005596060 nova_compute[247421]: 2026-01-26 18:15:11.120 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:12.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 121 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 15 KiB/s wr, 0 op/s
Jan 26 13:15:12 np0005596060 nova_compute[247421]: 2026-01-26 18:15:12.330 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:15:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:15:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:13.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:15:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:15:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:15:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:15:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:15:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:15:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:15:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:14.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 121 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 14 KiB/s wr, 0 op/s
Jan 26 13:15:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:15:14.745 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:15:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:15:14.746 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:15:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:15:14.746 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:15:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:15.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:16 np0005596060 podman[267056]: 2026-01-26 18:15:16.064646177 +0000 UTC m=+0.108979058 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:15:16 np0005596060 podman[267057]: 2026-01-26 18:15:16.093612155 +0000 UTC m=+0.133550256 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 13:15:16 np0005596060 nova_compute[247421]: 2026-01-26 18:15:16.123 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:16.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 121 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 3.7 KiB/s wr, 0 op/s
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:15:16 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 0d1f3cfc-5475-45ae-92fd-d6e7faef0b45 does not exist
Jan 26 13:15:16 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 95e99a47-6d60-46bf-a90a-8deb145c4369 does not exist
Jan 26 13:15:16 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8bb243e6-32c5-43fd-bb38-6ade5559e52c does not exist
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:15:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:15:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:17.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:17 np0005596060 nova_compute[247421]: 2026-01-26 18:15:17.384 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:17 np0005596060 podman[267349]: 2026-01-26 18:15:17.566452434 +0000 UTC m=+0.123424901 container create 78c2d7d6507dfd90f14596bdd2dc262511273f87437898c9af42db1472cd6482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_leavitt, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:15:17 np0005596060 podman[267349]: 2026-01-26 18:15:17.484211178 +0000 UTC m=+0.041183725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:15:17 np0005596060 systemd[1]: Started libpod-conmon-78c2d7d6507dfd90f14596bdd2dc262511273f87437898c9af42db1472cd6482.scope.
Jan 26 13:15:17 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:17.848706) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451317848744, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 461, "num_deletes": 258, "total_data_size": 386743, "memory_usage": 395976, "flush_reason": "Manual Compaction"}
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451317975101, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 383027, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26800, "largest_seqno": 27260, "table_properties": {"data_size": 380410, "index_size": 653, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6128, "raw_average_key_size": 17, "raw_value_size": 375063, "raw_average_value_size": 1080, "num_data_blocks": 29, "num_entries": 347, "num_filter_entries": 347, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769451296, "oldest_key_time": 1769451296, "file_creation_time": 1769451317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 126523 microseconds, and 2478 cpu microseconds.
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:15:17 np0005596060 podman[267349]: 2026-01-26 18:15:17.986767083 +0000 UTC m=+0.543739590 container init 78c2d7d6507dfd90f14596bdd2dc262511273f87437898c9af42db1472cd6482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:17.975165) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 383027 bytes OK
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:17.975248) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:17.993509) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:17.993567) EVENT_LOG_v1 {"time_micros": 1769451317993556, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:17.993592) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 383954, prev total WAL file size 383954, number of live WAL files 2.
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:17.994824) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353032' seq:72057594037927935, type:22 .. '6C6F676D00373536' seq:0, type:0; will stop at (end)
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(374KB)], [59(7566KB)]
Jan 26 13:15:17 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451317994938, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 8130942, "oldest_snapshot_seqno": -1}
Jan 26 13:15:17 np0005596060 podman[267349]: 2026-01-26 18:15:17.997239246 +0000 UTC m=+0.554211723 container start 78c2d7d6507dfd90f14596bdd2dc262511273f87437898c9af42db1472cd6482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:15:18 np0005596060 dreamy_leavitt[267365]: 167 167
Jan 26 13:15:18 np0005596060 systemd[1]: libpod-78c2d7d6507dfd90f14596bdd2dc262511273f87437898c9af42db1472cd6482.scope: Deactivated successfully.
Jan 26 13:15:18 np0005596060 podman[267349]: 2026-01-26 18:15:18.004506288 +0000 UTC m=+0.561478785 container attach 78c2d7d6507dfd90f14596bdd2dc262511273f87437898c9af42db1472cd6482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_leavitt, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 13:15:18 np0005596060 podman[267349]: 2026-01-26 18:15:18.005818971 +0000 UTC m=+0.562791478 container died 78c2d7d6507dfd90f14596bdd2dc262511273f87437898c9af42db1472cd6482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_leavitt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:15:18 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2cdfa4c49535667d6c7fd35a02254ced9be33eacdcf0bc6462656294e1db2d2c-merged.mount: Deactivated successfully.
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5022 keys, 8041625 bytes, temperature: kUnknown
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451318075439, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8041625, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8008442, "index_size": 19505, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 129235, "raw_average_key_size": 25, "raw_value_size": 7918124, "raw_average_value_size": 1576, "num_data_blocks": 788, "num_entries": 5022, "num_filter_entries": 5022, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769451317, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:18.076074) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8041625 bytes
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:18.077784) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.4 rd, 99.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 7.4 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(42.2) write-amplify(21.0) OK, records in: 5548, records dropped: 526 output_compression: NoCompression
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:18.077805) EVENT_LOG_v1 {"time_micros": 1769451318077795, "job": 32, "event": "compaction_finished", "compaction_time_micros": 80951, "compaction_time_cpu_micros": 46180, "output_level": 6, "num_output_files": 1, "total_output_size": 8041625, "num_input_records": 5548, "num_output_records": 5022, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451318077983, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451318079873, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:17.994034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:18.080049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:18.080059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:18.080062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:18.080064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:15:18 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:15:18.080066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:15:18 np0005596060 podman[267349]: 2026-01-26 18:15:18.089459132 +0000 UTC m=+0.646431589 container remove 78c2d7d6507dfd90f14596bdd2dc262511273f87437898c9af42db1472cd6482 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:15:18 np0005596060 systemd[1]: libpod-conmon-78c2d7d6507dfd90f14596bdd2dc262511273f87437898c9af42db1472cd6482.scope: Deactivated successfully.
Jan 26 13:15:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:18.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 121 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.9 KiB/s wr, 7 op/s
Jan 26 13:15:18 np0005596060 podman[267392]: 2026-01-26 18:15:18.254199541 +0000 UTC m=+0.020824554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:15:18 np0005596060 podman[267392]: 2026-01-26 18:15:18.369310613 +0000 UTC m=+0.135935606 container create 21370a5149e05da8d145e32fd0b15842f22be42b3ab26458860578fc081e7928 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 13:15:18 np0005596060 systemd[1]: Started libpod-conmon-21370a5149e05da8d145e32fd0b15842f22be42b3ab26458860578fc081e7928.scope.
Jan 26 13:15:18 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:15:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ed9754e40face98173c5e42b00ba830386cbcc89a266218b3f67f4411e6699/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ed9754e40face98173c5e42b00ba830386cbcc89a266218b3f67f4411e6699/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ed9754e40face98173c5e42b00ba830386cbcc89a266218b3f67f4411e6699/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ed9754e40face98173c5e42b00ba830386cbcc89a266218b3f67f4411e6699/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ed9754e40face98173c5e42b00ba830386cbcc89a266218b3f67f4411e6699/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:18 np0005596060 podman[267392]: 2026-01-26 18:15:18.463572041 +0000 UTC m=+0.230197054 container init 21370a5149e05da8d145e32fd0b15842f22be42b3ab26458860578fc081e7928 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 26 13:15:18 np0005596060 podman[267392]: 2026-01-26 18:15:18.4739046 +0000 UTC m=+0.240529613 container start 21370a5149e05da8d145e32fd0b15842f22be42b3ab26458860578fc081e7928 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:15:18 np0005596060 podman[267392]: 2026-01-26 18:15:18.477503921 +0000 UTC m=+0.244128914 container attach 21370a5149e05da8d145e32fd0b15842f22be42b3ab26458860578fc081e7928 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chandrasekhar, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 13:15:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:19.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:19 np0005596060 relaxed_chandrasekhar[267408]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:15:19 np0005596060 relaxed_chandrasekhar[267408]: --> relative data size: 1.0
Jan 26 13:15:19 np0005596060 relaxed_chandrasekhar[267408]: --> All data devices are unavailable
Jan 26 13:15:19 np0005596060 systemd[1]: libpod-21370a5149e05da8d145e32fd0b15842f22be42b3ab26458860578fc081e7928.scope: Deactivated successfully.
Jan 26 13:15:19 np0005596060 podman[267392]: 2026-01-26 18:15:19.281729673 +0000 UTC m=+1.048354666 container died 21370a5149e05da8d145e32fd0b15842f22be42b3ab26458860578fc081e7928 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chandrasekhar, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 26 13:15:19 np0005596060 systemd[1]: var-lib-containers-storage-overlay-66ed9754e40face98173c5e42b00ba830386cbcc89a266218b3f67f4411e6699-merged.mount: Deactivated successfully.
Jan 26 13:15:19 np0005596060 podman[267392]: 2026-01-26 18:15:19.572423465 +0000 UTC m=+1.339048458 container remove 21370a5149e05da8d145e32fd0b15842f22be42b3ab26458860578fc081e7928 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 13:15:19 np0005596060 systemd[1]: libpod-conmon-21370a5149e05da8d145e32fd0b15842f22be42b3ab26458860578fc081e7928.scope: Deactivated successfully.
Jan 26 13:15:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:20.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 121 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 7 op/s
Jan 26 13:15:20 np0005596060 podman[267576]: 2026-01-26 18:15:20.209793266 +0000 UTC m=+0.024163938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:15:20 np0005596060 podman[267576]: 2026-01-26 18:15:20.483887892 +0000 UTC m=+0.298258544 container create 8127517f669974aa02fb9e6d1340629774e30254b17e44aa75f6a9bb8e4f9a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:15:20 np0005596060 systemd[1]: Started libpod-conmon-8127517f669974aa02fb9e6d1340629774e30254b17e44aa75f6a9bb8e4f9a10.scope.
Jan 26 13:15:20 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:15:20 np0005596060 podman[267576]: 2026-01-26 18:15:20.589060764 +0000 UTC m=+0.403431446 container init 8127517f669974aa02fb9e6d1340629774e30254b17e44aa75f6a9bb8e4f9a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 26 13:15:20 np0005596060 podman[267576]: 2026-01-26 18:15:20.598792418 +0000 UTC m=+0.413163040 container start 8127517f669974aa02fb9e6d1340629774e30254b17e44aa75f6a9bb8e4f9a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclaren, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 26 13:15:20 np0005596060 podman[267576]: 2026-01-26 18:15:20.60283084 +0000 UTC m=+0.417201492 container attach 8127517f669974aa02fb9e6d1340629774e30254b17e44aa75f6a9bb8e4f9a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclaren, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:15:20 np0005596060 fervent_mclaren[267593]: 167 167
Jan 26 13:15:20 np0005596060 systemd[1]: libpod-8127517f669974aa02fb9e6d1340629774e30254b17e44aa75f6a9bb8e4f9a10.scope: Deactivated successfully.
Jan 26 13:15:20 np0005596060 podman[267576]: 2026-01-26 18:15:20.608101532 +0000 UTC m=+0.422472164 container died 8127517f669974aa02fb9e6d1340629774e30254b17e44aa75f6a9bb8e4f9a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclaren, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 13:15:20 np0005596060 systemd[1]: var-lib-containers-storage-overlay-5c5a4a6aa5038fc076581103aede924ae54ada1ef563e2154476b8af2b5641f8-merged.mount: Deactivated successfully.
Jan 26 13:15:20 np0005596060 podman[267576]: 2026-01-26 18:15:20.659048922 +0000 UTC m=+0.473419544 container remove 8127517f669974aa02fb9e6d1340629774e30254b17e44aa75f6a9bb8e4f9a10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclaren, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:15:20 np0005596060 systemd[1]: libpod-conmon-8127517f669974aa02fb9e6d1340629774e30254b17e44aa75f6a9bb8e4f9a10.scope: Deactivated successfully.
Jan 26 13:15:20 np0005596060 podman[267615]: 2026-01-26 18:15:20.81381586 +0000 UTC m=+0.039556945 container create 13171d268e973faff0968d983d6b794dc10c54cd48e73f4d4e92758dd4c3aa0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:15:20 np0005596060 systemd[1]: Started libpod-conmon-13171d268e973faff0968d983d6b794dc10c54cd48e73f4d4e92758dd4c3aa0f.scope.
Jan 26 13:15:20 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:15:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d6468f56851a7719a537be8b5b8d5541875866519b469ee9b5456a1b717136/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d6468f56851a7719a537be8b5b8d5541875866519b469ee9b5456a1b717136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d6468f56851a7719a537be8b5b8d5541875866519b469ee9b5456a1b717136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d6468f56851a7719a537be8b5b8d5541875866519b469ee9b5456a1b717136/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:20 np0005596060 podman[267615]: 2026-01-26 18:15:20.891014209 +0000 UTC m=+0.116755364 container init 13171d268e973faff0968d983d6b794dc10c54cd48e73f4d4e92758dd4c3aa0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brahmagupta, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:15:20 np0005596060 podman[267615]: 2026-01-26 18:15:20.796468064 +0000 UTC m=+0.022209179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:15:20 np0005596060 podman[267615]: 2026-01-26 18:15:20.903948344 +0000 UTC m=+0.129689459 container start 13171d268e973faff0968d983d6b794dc10c54cd48e73f4d4e92758dd4c3aa0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brahmagupta, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 13:15:20 np0005596060 podman[267615]: 2026-01-26 18:15:20.907235937 +0000 UTC m=+0.132977042 container attach 13171d268e973faff0968d983d6b794dc10c54cd48e73f4d4e92758dd4c3aa0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:15:21 np0005596060 nova_compute[247421]: 2026-01-26 18:15:21.126 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:21.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]: {
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:    "1": [
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:        {
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            "devices": [
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "/dev/loop3"
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            ],
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            "lv_name": "ceph_lv0",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            "lv_size": "7511998464",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            "name": "ceph_lv0",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            "tags": {
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "ceph.cluster_name": "ceph",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "ceph.crush_device_class": "",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "ceph.encrypted": "0",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "ceph.osd_id": "1",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "ceph.type": "block",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:                "ceph.vdo": "0"
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            },
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            "type": "block",
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:            "vg_name": "ceph_vg0"
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:        }
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]:    ]
Jan 26 13:15:21 np0005596060 dazzling_brahmagupta[267631]: }
Jan 26 13:15:21 np0005596060 systemd[1]: libpod-13171d268e973faff0968d983d6b794dc10c54cd48e73f4d4e92758dd4c3aa0f.scope: Deactivated successfully.
Jan 26 13:15:21 np0005596060 podman[267615]: 2026-01-26 18:15:21.62190322 +0000 UTC m=+0.847644315 container died 13171d268e973faff0968d983d6b794dc10c54cd48e73f4d4e92758dd4c3aa0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brahmagupta, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:15:21 np0005596060 systemd[1]: var-lib-containers-storage-overlay-55d6468f56851a7719a537be8b5b8d5541875866519b469ee9b5456a1b717136-merged.mount: Deactivated successfully.
Jan 26 13:15:21 np0005596060 podman[267615]: 2026-01-26 18:15:21.876569957 +0000 UTC m=+1.102311042 container remove 13171d268e973faff0968d983d6b794dc10c54cd48e73f4d4e92758dd4c3aa0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 13:15:21 np0005596060 systemd[1]: libpod-conmon-13171d268e973faff0968d983d6b794dc10c54cd48e73f4d4e92758dd4c3aa0f.scope: Deactivated successfully.
Jan 26 13:15:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:22.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 113 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 45 op/s
Jan 26 13:15:22 np0005596060 nova_compute[247421]: 2026-01-26 18:15:22.386 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:22 np0005596060 podman[267795]: 2026-01-26 18:15:22.682366478 +0000 UTC m=+0.039761469 container create f92d6d3a673ec54a88ce5a8e930263d3d6301fc80b227c805df0e94903766793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:15:22 np0005596060 systemd[1]: Started libpod-conmon-f92d6d3a673ec54a88ce5a8e930263d3d6301fc80b227c805df0e94903766793.scope.
Jan 26 13:15:22 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:15:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:15:22 np0005596060 podman[267795]: 2026-01-26 18:15:22.663225327 +0000 UTC m=+0.020620348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:15:22 np0005596060 podman[267795]: 2026-01-26 18:15:22.760738056 +0000 UTC m=+0.118133077 container init f92d6d3a673ec54a88ce5a8e930263d3d6301fc80b227c805df0e94903766793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:15:22 np0005596060 podman[267795]: 2026-01-26 18:15:22.768605644 +0000 UTC m=+0.126000635 container start f92d6d3a673ec54a88ce5a8e930263d3d6301fc80b227c805df0e94903766793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 13:15:22 np0005596060 podman[267795]: 2026-01-26 18:15:22.772098032 +0000 UTC m=+0.129493023 container attach f92d6d3a673ec54a88ce5a8e930263d3d6301fc80b227c805df0e94903766793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:15:22 np0005596060 strange_merkle[267811]: 167 167
Jan 26 13:15:22 np0005596060 systemd[1]: libpod-f92d6d3a673ec54a88ce5a8e930263d3d6301fc80b227c805df0e94903766793.scope: Deactivated successfully.
Jan 26 13:15:22 np0005596060 podman[267795]: 2026-01-26 18:15:22.77563341 +0000 UTC m=+0.133028401 container died f92d6d3a673ec54a88ce5a8e930263d3d6301fc80b227c805df0e94903766793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 13:15:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0d169732a0be1a9e0b7921de7c467a7ab671e618077fb1060f3648a2ba98e52e-merged.mount: Deactivated successfully.
Jan 26 13:15:22 np0005596060 podman[267795]: 2026-01-26 18:15:22.812217299 +0000 UTC m=+0.169612290 container remove f92d6d3a673ec54a88ce5a8e930263d3d6301fc80b227c805df0e94903766793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_merkle, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 26 13:15:22 np0005596060 systemd[1]: libpod-conmon-f92d6d3a673ec54a88ce5a8e930263d3d6301fc80b227c805df0e94903766793.scope: Deactivated successfully.
Jan 26 13:15:23 np0005596060 podman[267835]: 2026-01-26 18:15:23.005710589 +0000 UTC m=+0.047234717 container create b885a6fe0f6ae3f047b61083b465cbd0bb2dd8be4b65c6f02c647fac7f744e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pascal, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 13:15:23 np0005596060 systemd[1]: Started libpod-conmon-b885a6fe0f6ae3f047b61083b465cbd0bb2dd8be4b65c6f02c647fac7f744e24.scope.
Jan 26 13:15:23 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:15:23 np0005596060 podman[267835]: 2026-01-26 18:15:22.98700395 +0000 UTC m=+0.028528098 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:15:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a81e7cd2097861f160be4e611292150df2195958a0500b9a3b0ceb64424a218/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a81e7cd2097861f160be4e611292150df2195958a0500b9a3b0ceb64424a218/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a81e7cd2097861f160be4e611292150df2195958a0500b9a3b0ceb64424a218/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a81e7cd2097861f160be4e611292150df2195958a0500b9a3b0ceb64424a218/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:15:23 np0005596060 podman[267835]: 2026-01-26 18:15:23.099353092 +0000 UTC m=+0.140877220 container init b885a6fe0f6ae3f047b61083b465cbd0bb2dd8be4b65c6f02c647fac7f744e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 26 13:15:23 np0005596060 podman[267835]: 2026-01-26 18:15:23.107270891 +0000 UTC m=+0.148795019 container start b885a6fe0f6ae3f047b61083b465cbd0bb2dd8be4b65c6f02c647fac7f744e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pascal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 13:15:23 np0005596060 podman[267835]: 2026-01-26 18:15:23.111024595 +0000 UTC m=+0.152548803 container attach b885a6fe0f6ae3f047b61083b465cbd0bb2dd8be4b65c6f02c647fac7f744e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pascal, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:15:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:23.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:23 np0005596060 relaxed_pascal[267851]: {
Jan 26 13:15:23 np0005596060 relaxed_pascal[267851]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:15:23 np0005596060 relaxed_pascal[267851]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:15:23 np0005596060 relaxed_pascal[267851]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:15:23 np0005596060 relaxed_pascal[267851]:        "osd_id": 1,
Jan 26 13:15:23 np0005596060 relaxed_pascal[267851]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:15:23 np0005596060 relaxed_pascal[267851]:        "type": "bluestore"
Jan 26 13:15:23 np0005596060 relaxed_pascal[267851]:    }
Jan 26 13:15:23 np0005596060 relaxed_pascal[267851]: }
Jan 26 13:15:24 np0005596060 systemd[1]: libpod-b885a6fe0f6ae3f047b61083b465cbd0bb2dd8be4b65c6f02c647fac7f744e24.scope: Deactivated successfully.
Jan 26 13:15:24 np0005596060 podman[267835]: 2026-01-26 18:15:24.013862414 +0000 UTC m=+1.055386542 container died b885a6fe0f6ae3f047b61083b465cbd0bb2dd8be4b65c6f02c647fac7f744e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 13:15:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay-3a81e7cd2097861f160be4e611292150df2195958a0500b9a3b0ceb64424a218-merged.mount: Deactivated successfully.
Jan 26 13:15:24 np0005596060 podman[267835]: 2026-01-26 18:15:24.080274143 +0000 UTC m=+1.121798261 container remove b885a6fe0f6ae3f047b61083b465cbd0bb2dd8be4b65c6f02c647fac7f744e24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 13:15:24 np0005596060 systemd[1]: libpod-conmon-b885a6fe0f6ae3f047b61083b465cbd0bb2dd8be4b65c6f02c647fac7f744e24.scope: Deactivated successfully.
Jan 26 13:15:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:15:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:15:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:15:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:15:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev cab16bae-b691-4fb2-af73-286aec785adf does not exist
Jan 26 13:15:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d94e67a9-988c-44a6-8225-cc6853600ee9 does not exist
Jan 26 13:15:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 09631746-790f-414f-845b-11a14e0a92cb does not exist
Jan 26 13:15:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:24.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 88 MiB data, 280 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 69 op/s
Jan 26 13:15:24 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:15:24.843 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:15:24 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:15:24.844 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:15:24 np0005596060 nova_compute[247421]: 2026-01-26 18:15:24.882 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:25 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:15:25 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:15:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:25.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:25 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:15:25.846 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:15:26 np0005596060 nova_compute[247421]: 2026-01-26 18:15:26.127 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 88 MiB data, 280 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 69 op/s
Jan 26 13:15:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:26.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 26 13:15:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 26 13:15:26 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 26 13:15:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:27.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:27 np0005596060 nova_compute[247421]: 2026-01-26 18:15:27.389 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:15:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 59 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 26 13:15:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:28.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:29.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 59 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 26 13:15:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:30.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 26 13:15:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 26 13:15:30 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 26 13:15:31 np0005596060 nova_compute[247421]: 2026-01-26 18:15:31.130 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:31.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 4.6 KiB/s wr, 51 op/s
Jan 26 13:15:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:32.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:32 np0005596060 nova_compute[247421]: 2026-01-26 18:15:32.392 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:15:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:33.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 5.0 KiB/s wr, 52 op/s
Jan 26 13:15:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:34.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:35.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:36 np0005596060 nova_compute[247421]: 2026-01-26 18:15:36.132 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 2.6 KiB/s wr, 39 op/s
Jan 26 13:15:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:36.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:37.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:37 np0005596060 nova_compute[247421]: 2026-01-26 18:15:37.394 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:15:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 26 13:15:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 26 13:15:37 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 26 13:15:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 2.0 KiB/s wr, 32 op/s
Jan 26 13:15:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:38.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:39.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.6 KiB/s wr, 26 op/s
Jan 26 13:15:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:40.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:41 np0005596060 nova_compute[247421]: 2026-01-26 18:15:41.133 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:41.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 614 B/s rd, 307 B/s wr, 1 op/s
Jan 26 13:15:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:42.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:42 np0005596060 nova_compute[247421]: 2026-01-26 18:15:42.397 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:15:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:43.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:15:44
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'images', 'default.rgw.control', 'volumes', 'vms']
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 26 13:15:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:44.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:15:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:15:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:45.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:46 np0005596060 nova_compute[247421]: 2026-01-26 18:15:46.136 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 88 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 26 13:15:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:46.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:46 np0005596060 podman[267997]: 2026-01-26 18:15:46.80412978 +0000 UTC m=+0.056292465 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 26 13:15:46 np0005596060 podman[267998]: 2026-01-26 18:15:46.842969886 +0000 UTC m=+0.094614668 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 13:15:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 26 13:15:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:47.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 26 13:15:47 np0005596060 nova_compute[247421]: 2026-01-26 18:15:47.399 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:15:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 95 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 2.4 KiB/s rd, 349 KiB/s wr, 3 op/s
Jan 26 13:15:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:48.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:49.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 95 MiB data, 289 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 299 KiB/s wr, 3 op/s
Jan 26 13:15:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:50.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:51 np0005596060 nova_compute[247421]: 2026-01-26 18:15:51.138 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:51.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:15:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:52.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:52 np0005596060 nova_compute[247421]: 2026-01-26 18:15:52.401 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:15:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:53.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1245: 305 pgs: 305 active+clean; 134 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:15:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:54.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:55.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:56 np0005596060 nova_compute[247421]: 2026-01-26 18:15:56.141 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 134 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:15:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:56.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:56 np0005596060 nova_compute[247421]: 2026-01-26 18:15:56.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:15:56 np0005596060 nova_compute[247421]: 2026-01-26 18:15:56.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:15:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:57.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:57 np0005596060 nova_compute[247421]: 2026-01-26 18:15:57.404 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:15:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:15:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 134 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 26 13:15:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:15:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:15:58.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:15:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:15:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:15:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:15:59.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:15:59 np0005596060 nova_compute[247421]: 2026-01-26 18:15:59.647 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:16:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 134 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.5 MiB/s wr, 33 op/s
Jan 26 13:16:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:00.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:00 np0005596060 nova_compute[247421]: 2026-01-26 18:16:00.644 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:16:00 np0005596060 nova_compute[247421]: 2026-01-26 18:16:00.708 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:16:00 np0005596060 nova_compute[247421]: 2026-01-26 18:16:00.709 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:16:00 np0005596060 nova_compute[247421]: 2026-01-26 18:16:00.709 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:16:01 np0005596060 nova_compute[247421]: 2026-01-26 18:16:01.142 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:01.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:01 np0005596060 nova_compute[247421]: 2026-01-26 18:16:01.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:16:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 134 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.5 MiB/s wr, 87 op/s
Jan 26 13:16:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:02.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:02 np0005596060 nova_compute[247421]: 2026-01-26 18:16:02.406 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:16:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:03.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:03 np0005596060 nova_compute[247421]: 2026-01-26 18:16:03.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:16:03 np0005596060 nova_compute[247421]: 2026-01-26 18:16:03.718 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:16:03 np0005596060 nova_compute[247421]: 2026-01-26 18:16:03.718 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:16:03 np0005596060 nova_compute[247421]: 2026-01-26 18:16:03.719 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:16:03 np0005596060 nova_compute[247421]: 2026-01-26 18:16:03.719 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:16:03 np0005596060 nova_compute[247421]: 2026-01-26 18:16:03.719 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2968946293969849 quantized to 32 (current 32)
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:16:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:16:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 134 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:16:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:04.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:16:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3514290821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:16:04 np0005596060 nova_compute[247421]: 2026-01-26 18:16:04.224 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:16:04 np0005596060 nova_compute[247421]: 2026-01-26 18:16:04.362 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:16:04 np0005596060 nova_compute[247421]: 2026-01-26 18:16:04.363 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4836MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:16:04 np0005596060 nova_compute[247421]: 2026-01-26 18:16:04.363 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:16:04 np0005596060 nova_compute[247421]: 2026-01-26 18:16:04.363 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:16:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:05.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:05 np0005596060 nova_compute[247421]: 2026-01-26 18:16:05.417 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:16:05 np0005596060 nova_compute[247421]: 2026-01-26 18:16:05.417 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:16:05 np0005596060 nova_compute[247421]: 2026-01-26 18:16:05.458 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:16:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:16:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1035754232' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:16:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:16:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1035754232' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:16:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:16:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741219996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:16:05 np0005596060 nova_compute[247421]: 2026-01-26 18:16:05.870 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:16:05 np0005596060 nova_compute[247421]: 2026-01-26 18:16:05.876 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:16:05 np0005596060 nova_compute[247421]: 2026-01-26 18:16:05.901 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:16:05 np0005596060 nova_compute[247421]: 2026-01-26 18:16:05.903 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:16:05 np0005596060 nova_compute[247421]: 2026-01-26 18:16:05.903 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:16:06 np0005596060 nova_compute[247421]: 2026-01-26 18:16:06.144 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 134 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:16:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:06.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:06 np0005596060 nova_compute[247421]: 2026-01-26 18:16:06.903 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:16:06 np0005596060 nova_compute[247421]: 2026-01-26 18:16:06.904 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:16:06 np0005596060 nova_compute[247421]: 2026-01-26 18:16:06.904 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:16:06 np0005596060 nova_compute[247421]: 2026-01-26 18:16:06.935 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:16:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:07.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 26 13:16:07 np0005596060 nova_compute[247421]: 2026-01-26 18:16:07.409 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 26 13:16:07 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 26 13:16:07 np0005596060 nova_compute[247421]: 2026-01-26 18:16:07.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:16:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:16:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 134 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 307 B/s wr, 96 op/s
Jan 26 13:16:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:08.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 26 13:16:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 26 13:16:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 26 13:16:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:09.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 26 13:16:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 26 13:16:09 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 26 13:16:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 134 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 511 B/s wr, 31 op/s
Jan 26 13:16:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:10.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:11 np0005596060 nova_compute[247421]: 2026-01-26 18:16:11.145 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:16:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/630511229' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:16:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:16:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/630511229' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:16:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:11.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 146 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Jan 26 13:16:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:12.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:12 np0005596060 nova_compute[247421]: 2026-01-26 18:16:12.411 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:16:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:16:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/878244880' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:16:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:16:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/878244880' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:16:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:13.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:16:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:16:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:16:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:16:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:16:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:16:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 154 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 125 op/s
Jan 26 13:16:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:16:14.197 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:16:14 np0005596060 nova_compute[247421]: 2026-01-26 18:16:14.198 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:16:14.199 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:16:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:14.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:16:14.746 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:16:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:16:14.748 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:16:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:16:14.748 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:16:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:15.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 154 MiB data, 324 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 105 op/s
Jan 26 13:16:16 np0005596060 nova_compute[247421]: 2026-01-26 18:16:16.208 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:16.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 26 13:16:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 26 13:16:16 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 26 13:16:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:17.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:17 np0005596060 nova_compute[247421]: 2026-01-26 18:16:17.414 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:16:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 26 13:16:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 26 13:16:17 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 26 13:16:17 np0005596060 podman[268206]: 2026-01-26 18:16:17.82240464 +0000 UTC m=+0.075395825 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:16:17 np0005596060 podman[268207]: 2026-01-26 18:16:17.863157454 +0000 UTC m=+0.116141398 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 26 13:16:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 104 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 169 op/s
Jan 26 13:16:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:18.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:19.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 2 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 296 active+clean; 104 MiB data, 306 MiB used, 21 GiB / 21 GiB avail; 499 KiB/s rd, 1.0 MiB/s wr, 70 op/s
Jan 26 13:16:20 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:16:20.201 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:16:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:16:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:20.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:16:21 np0005596060 nova_compute[247421]: 2026-01-26 18:16:21.210 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:21.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 49 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 66 KiB/s rd, 3.9 KiB/s wr, 92 op/s
Jan 26 13:16:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:22.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:22 np0005596060 nova_compute[247421]: 2026-01-26 18:16:22.415 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:16:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 26 13:16:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 26 13:16:23 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 26 13:16:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:23.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 41 MiB data, 275 MiB used, 21 GiB / 21 GiB avail; 77 KiB/s rd, 3.9 KiB/s wr, 105 op/s
Jan 26 13:16:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:24.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:25.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:16:25 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d7aee9d9-1e80-4ae8-bc54-8137b5e5c425 does not exist
Jan 26 13:16:25 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d49c9bcd-0b3e-4424-8e26-00cd12940e4e does not exist
Jan 26 13:16:25 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9e2f1739-dfd1-4c17-9694-20e1feefbec5 does not exist
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:16:25 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:16:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 41 MiB data, 275 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.7 KiB/s wr, 38 op/s
Jan 26 13:16:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:26.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:26 np0005596060 nova_compute[247421]: 2026-01-26 18:16:26.244 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:26 np0005596060 podman[268523]: 2026-01-26 18:16:26.287672514 +0000 UTC m=+0.062747157 container create a86ffe8e09c25c2c4ea18c2542a2ec87c116fef07de5ab792fa05ad28424ec6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 13:16:26 np0005596060 podman[268523]: 2026-01-26 18:16:26.268148844 +0000 UTC m=+0.043223507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:16:26 np0005596060 systemd[1]: Started libpod-conmon-a86ffe8e09c25c2c4ea18c2542a2ec87c116fef07de5ab792fa05ad28424ec6b.scope.
Jan 26 13:16:26 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:16:26 np0005596060 podman[268523]: 2026-01-26 18:16:26.903662657 +0000 UTC m=+0.678737330 container init a86ffe8e09c25c2c4ea18c2542a2ec87c116fef07de5ab792fa05ad28424ec6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cori, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:16:26 np0005596060 podman[268523]: 2026-01-26 18:16:26.917270329 +0000 UTC m=+0.692344972 container start a86ffe8e09c25c2c4ea18c2542a2ec87c116fef07de5ab792fa05ad28424ec6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cori, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 13:16:26 np0005596060 distracted_cori[268540]: 167 167
Jan 26 13:16:26 np0005596060 systemd[1]: libpod-a86ffe8e09c25c2c4ea18c2542a2ec87c116fef07de5ab792fa05ad28424ec6b.scope: Deactivated successfully.
Jan 26 13:16:26 np0005596060 conmon[268540]: conmon a86ffe8e09c25c2c4ea1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a86ffe8e09c25c2c4ea18c2542a2ec87c116fef07de5ab792fa05ad28424ec6b.scope/container/memory.events
Jan 26 13:16:27 np0005596060 podman[268523]: 2026-01-26 18:16:27.119384756 +0000 UTC m=+0.894459439 container attach a86ffe8e09c25c2c4ea18c2542a2ec87c116fef07de5ab792fa05ad28424ec6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cori, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:16:27 np0005596060 podman[268523]: 2026-01-26 18:16:27.121663534 +0000 UTC m=+0.896738187 container died a86ffe8e09c25c2c4ea18c2542a2ec87c116fef07de5ab792fa05ad28424ec6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cori, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 13:16:27 np0005596060 systemd[1]: var-lib-containers-storage-overlay-1ac0767172831d767684535ecb8b4a2331574154b650417a4e47ef42210d5e8e-merged.mount: Deactivated successfully.
Jan 26 13:16:27 np0005596060 podman[268523]: 2026-01-26 18:16:27.223298177 +0000 UTC m=+0.998372820 container remove a86ffe8e09c25c2c4ea18c2542a2ec87c116fef07de5ab792fa05ad28424ec6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_cori, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 13:16:27 np0005596060 systemd[1]: libpod-conmon-a86ffe8e09c25c2c4ea18c2542a2ec87c116fef07de5ab792fa05ad28424ec6b.scope: Deactivated successfully.
Jan 26 13:16:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:27.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:27 np0005596060 podman[268566]: 2026-01-26 18:16:27.395860872 +0000 UTC m=+0.045758081 container create aecfb6c21862a40dbedc6f58a5304039d735b5198e1506eb5439aeff6644af48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:16:27 np0005596060 podman[268566]: 2026-01-26 18:16:27.377434559 +0000 UTC m=+0.027331788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:16:27 np0005596060 nova_compute[247421]: 2026-01-26 18:16:27.480 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:27 np0005596060 systemd[1]: Started libpod-conmon-aecfb6c21862a40dbedc6f58a5304039d735b5198e1506eb5439aeff6644af48.scope.
Jan 26 13:16:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:16:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd63f94ca67e1b212d094c65bd5181aa2a6e09e8d7d495e4b9abc5fb66d9b35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd63f94ca67e1b212d094c65bd5181aa2a6e09e8d7d495e4b9abc5fb66d9b35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd63f94ca67e1b212d094c65bd5181aa2a6e09e8d7d495e4b9abc5fb66d9b35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd63f94ca67e1b212d094c65bd5181aa2a6e09e8d7d495e4b9abc5fb66d9b35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd63f94ca67e1b212d094c65bd5181aa2a6e09e8d7d495e4b9abc5fb66d9b35/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:27 np0005596060 podman[268566]: 2026-01-26 18:16:27.544645019 +0000 UTC m=+0.194542248 container init aecfb6c21862a40dbedc6f58a5304039d735b5198e1506eb5439aeff6644af48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:16:27 np0005596060 podman[268566]: 2026-01-26 18:16:27.552855346 +0000 UTC m=+0.202752545 container start aecfb6c21862a40dbedc6f58a5304039d735b5198e1506eb5439aeff6644af48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:16:27 np0005596060 podman[268566]: 2026-01-26 18:16:27.555719488 +0000 UTC m=+0.205616697 container attach aecfb6c21862a40dbedc6f58a5304039d735b5198e1506eb5439aeff6644af48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shannon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 13:16:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:16:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 26 13:16:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:28.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:28 np0005596060 inspiring_shannon[268582]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:16:28 np0005596060 inspiring_shannon[268582]: --> relative data size: 1.0
Jan 26 13:16:28 np0005596060 inspiring_shannon[268582]: --> All data devices are unavailable
Jan 26 13:16:28 np0005596060 systemd[1]: libpod-aecfb6c21862a40dbedc6f58a5304039d735b5198e1506eb5439aeff6644af48.scope: Deactivated successfully.
Jan 26 13:16:28 np0005596060 podman[268566]: 2026-01-26 18:16:28.337846005 +0000 UTC m=+0.987743214 container died aecfb6c21862a40dbedc6f58a5304039d735b5198e1506eb5439aeff6644af48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shannon, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:16:28 np0005596060 systemd[1]: var-lib-containers-storage-overlay-5bd63f94ca67e1b212d094c65bd5181aa2a6e09e8d7d495e4b9abc5fb66d9b35-merged.mount: Deactivated successfully.
Jan 26 13:16:28 np0005596060 podman[268566]: 2026-01-26 18:16:28.398074168 +0000 UTC m=+1.047971377 container remove aecfb6c21862a40dbedc6f58a5304039d735b5198e1506eb5439aeff6644af48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:16:28 np0005596060 systemd[1]: libpod-conmon-aecfb6c21862a40dbedc6f58a5304039d735b5198e1506eb5439aeff6644af48.scope: Deactivated successfully.
Jan 26 13:16:29 np0005596060 podman[268751]: 2026-01-26 18:16:29.00757497 +0000 UTC m=+0.021416419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:16:29 np0005596060 podman[268751]: 2026-01-26 18:16:29.247980319 +0000 UTC m=+0.261821748 container create ab7d419141f82b4fc33425815a316e913352ec2c2108a7d1a3dcc55961c76a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_raman, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 13:16:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:29.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:29 np0005596060 systemd[1]: Started libpod-conmon-ab7d419141f82b4fc33425815a316e913352ec2c2108a7d1a3dcc55961c76a78.scope.
Jan 26 13:16:29 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:16:29 np0005596060 podman[268751]: 2026-01-26 18:16:29.354766761 +0000 UTC m=+0.368608200 container init ab7d419141f82b4fc33425815a316e913352ec2c2108a7d1a3dcc55961c76a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:16:29 np0005596060 podman[268751]: 2026-01-26 18:16:29.362730892 +0000 UTC m=+0.376572321 container start ab7d419141f82b4fc33425815a316e913352ec2c2108a7d1a3dcc55961c76a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:16:29 np0005596060 podman[268751]: 2026-01-26 18:16:29.367655815 +0000 UTC m=+0.381497244 container attach ab7d419141f82b4fc33425815a316e913352ec2c2108a7d1a3dcc55961c76a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_raman, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 13:16:29 np0005596060 admiring_raman[268767]: 167 167
Jan 26 13:16:29 np0005596060 systemd[1]: libpod-ab7d419141f82b4fc33425815a316e913352ec2c2108a7d1a3dcc55961c76a78.scope: Deactivated successfully.
Jan 26 13:16:29 np0005596060 podman[268751]: 2026-01-26 18:16:29.369860951 +0000 UTC m=+0.383702390 container died ab7d419141f82b4fc33425815a316e913352ec2c2108a7d1a3dcc55961c76a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_raman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 13:16:29 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d6d20d5f52fee8632f1d418ec5efdc2cebc5d2c2c4016587ce9e1f8d29f66fc3-merged.mount: Deactivated successfully.
Jan 26 13:16:29 np0005596060 podman[268751]: 2026-01-26 18:16:29.418092972 +0000 UTC m=+0.431934401 container remove ab7d419141f82b4fc33425815a316e913352ec2c2108a7d1a3dcc55961c76a78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_raman, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:16:29 np0005596060 systemd[1]: libpod-conmon-ab7d419141f82b4fc33425815a316e913352ec2c2108a7d1a3dcc55961c76a78.scope: Deactivated successfully.
Jan 26 13:16:29 np0005596060 podman[268789]: 2026-01-26 18:16:29.603006897 +0000 UTC m=+0.047944015 container create 1a803f1d0c957f17cdc8e2fa290715c570ecada2919f69e8d30cb76e111adc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elion, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 13:16:29 np0005596060 systemd[1]: Started libpod-conmon-1a803f1d0c957f17cdc8e2fa290715c570ecada2919f69e8d30cb76e111adc04.scope.
Jan 26 13:16:29 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:16:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65d78553d6131cf4508465e5bcaae7989a2fa32aa2a225c0f933c53d96186293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65d78553d6131cf4508465e5bcaae7989a2fa32aa2a225c0f933c53d96186293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65d78553d6131cf4508465e5bcaae7989a2fa32aa2a225c0f933c53d96186293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65d78553d6131cf4508465e5bcaae7989a2fa32aa2a225c0f933c53d96186293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:29 np0005596060 podman[268789]: 2026-01-26 18:16:29.585842336 +0000 UTC m=+0.030779484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:16:29 np0005596060 podman[268789]: 2026-01-26 18:16:29.689319206 +0000 UTC m=+0.134256344 container init 1a803f1d0c957f17cdc8e2fa290715c570ecada2919f69e8d30cb76e111adc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 13:16:29 np0005596060 podman[268789]: 2026-01-26 18:16:29.697096791 +0000 UTC m=+0.142033909 container start 1a803f1d0c957f17cdc8e2fa290715c570ecada2919f69e8d30cb76e111adc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elion, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 13:16:29 np0005596060 podman[268789]: 2026-01-26 18:16:29.700501417 +0000 UTC m=+0.145438555 container attach 1a803f1d0c957f17cdc8e2fa290715c570ecada2919f69e8d30cb76e111adc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 13:16:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 26 13:16:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:30.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:30 np0005596060 keen_elion[268805]: {
Jan 26 13:16:30 np0005596060 keen_elion[268805]:    "1": [
Jan 26 13:16:30 np0005596060 keen_elion[268805]:        {
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            "devices": [
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "/dev/loop3"
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            ],
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            "lv_name": "ceph_lv0",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            "lv_size": "7511998464",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            "name": "ceph_lv0",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            "tags": {
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "ceph.cluster_name": "ceph",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "ceph.crush_device_class": "",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "ceph.encrypted": "0",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "ceph.osd_id": "1",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "ceph.type": "block",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:                "ceph.vdo": "0"
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            },
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            "type": "block",
Jan 26 13:16:30 np0005596060 keen_elion[268805]:            "vg_name": "ceph_vg0"
Jan 26 13:16:30 np0005596060 keen_elion[268805]:        }
Jan 26 13:16:30 np0005596060 keen_elion[268805]:    ]
Jan 26 13:16:30 np0005596060 keen_elion[268805]: }
Jan 26 13:16:30 np0005596060 systemd[1]: libpod-1a803f1d0c957f17cdc8e2fa290715c570ecada2919f69e8d30cb76e111adc04.scope: Deactivated successfully.
Jan 26 13:16:30 np0005596060 podman[268789]: 2026-01-26 18:16:30.411399984 +0000 UTC m=+0.856337132 container died 1a803f1d0c957f17cdc8e2fa290715c570ecada2919f69e8d30cb76e111adc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:16:31 np0005596060 systemd[1]: var-lib-containers-storage-overlay-65d78553d6131cf4508465e5bcaae7989a2fa32aa2a225c0f933c53d96186293-merged.mount: Deactivated successfully.
Jan 26 13:16:31 np0005596060 nova_compute[247421]: 2026-01-26 18:16:31.246 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:31.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:31 np0005596060 podman[268789]: 2026-01-26 18:16:31.290139309 +0000 UTC m=+1.735076427 container remove 1a803f1d0c957f17cdc8e2fa290715c570ecada2919f69e8d30cb76e111adc04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:16:31 np0005596060 systemd[1]: libpod-conmon-1a803f1d0c957f17cdc8e2fa290715c570ecada2919f69e8d30cb76e111adc04.scope: Deactivated successfully.
Jan 26 13:16:32 np0005596060 podman[269020]: 2026-01-26 18:16:31.99953759 +0000 UTC m=+0.028291802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:16:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 7.8 KiB/s rd, 0 B/s wr, 9 op/s
Jan 26 13:16:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:32.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:32 np0005596060 podman[269020]: 2026-01-26 18:16:32.329116629 +0000 UTC m=+0.357870791 container create 194e1dcc04163ddba3f7c4b4083c33cc28e983bea5189bf43613c3ec634b8c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Jan 26 13:16:32 np0005596060 nova_compute[247421]: 2026-01-26 18:16:32.484 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:32 np0005596060 systemd[1]: Started libpod-conmon-194e1dcc04163ddba3f7c4b4083c33cc28e983bea5189bf43613c3ec634b8c3f.scope.
Jan 26 13:16:32 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:16:32 np0005596060 podman[269020]: 2026-01-26 18:16:32.574654227 +0000 UTC m=+0.603408379 container init 194e1dcc04163ddba3f7c4b4083c33cc28e983bea5189bf43613c3ec634b8c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:16:32 np0005596060 podman[269020]: 2026-01-26 18:16:32.582561666 +0000 UTC m=+0.611315788 container start 194e1dcc04163ddba3f7c4b4083c33cc28e983bea5189bf43613c3ec634b8c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:16:32 np0005596060 distracted_elgamal[269038]: 167 167
Jan 26 13:16:32 np0005596060 systemd[1]: libpod-194e1dcc04163ddba3f7c4b4083c33cc28e983bea5189bf43613c3ec634b8c3f.scope: Deactivated successfully.
Jan 26 13:16:32 np0005596060 podman[269020]: 2026-01-26 18:16:32.802389838 +0000 UTC m=+0.831143990 container attach 194e1dcc04163ddba3f7c4b4083c33cc28e983bea5189bf43613c3ec634b8c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:16:32 np0005596060 podman[269020]: 2026-01-26 18:16:32.803357093 +0000 UTC m=+0.832111275 container died 194e1dcc04163ddba3f7c4b4083c33cc28e983bea5189bf43613c3ec634b8c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:16:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c9a6fd6a0cff4e7495ec87215d660dd53f6c0388b4496087a77f55bb3d784ac3-merged.mount: Deactivated successfully.
Jan 26 13:16:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:16:33 np0005596060 podman[269020]: 2026-01-26 18:16:33.166343732 +0000 UTC m=+1.195097854 container remove 194e1dcc04163ddba3f7c4b4083c33cc28e983bea5189bf43613c3ec634b8c3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_elgamal, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 13:16:33 np0005596060 systemd[1]: libpod-conmon-194e1dcc04163ddba3f7c4b4083c33cc28e983bea5189bf43613c3ec634b8c3f.scope: Deactivated successfully.
Jan 26 13:16:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:33.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:33 np0005596060 podman[269064]: 2026-01-26 18:16:33.377603089 +0000 UTC m=+0.033692698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:16:33 np0005596060 podman[269064]: 2026-01-26 18:16:33.821088499 +0000 UTC m=+0.477178108 container create 2132f01f4834a8227330f8f94cdc833b519f6b2c1511f0089ed64de8af4c41b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:16:33 np0005596060 systemd[1]: Started libpod-conmon-2132f01f4834a8227330f8f94cdc833b519f6b2c1511f0089ed64de8af4c41b7.scope.
Jan 26 13:16:33 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:16:33 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b99a41d6084cfb1d251a47514dfd3e78a631e376d299a946e365886c5c845eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:33 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b99a41d6084cfb1d251a47514dfd3e78a631e376d299a946e365886c5c845eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:33 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b99a41d6084cfb1d251a47514dfd3e78a631e376d299a946e365886c5c845eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:33 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b99a41d6084cfb1d251a47514dfd3e78a631e376d299a946e365886c5c845eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:16:33 np0005596060 podman[269064]: 2026-01-26 18:16:33.922798224 +0000 UTC m=+0.578887813 container init 2132f01f4834a8227330f8f94cdc833b519f6b2c1511f0089ed64de8af4c41b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:16:33 np0005596060 podman[269064]: 2026-01-26 18:16:33.929898192 +0000 UTC m=+0.585987781 container start 2132f01f4834a8227330f8f94cdc833b519f6b2c1511f0089ed64de8af4c41b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 13:16:34 np0005596060 podman[269064]: 2026-01-26 18:16:34.06596683 +0000 UTC m=+0.722056419 container attach 2132f01f4834a8227330f8f94cdc833b519f6b2c1511f0089ed64de8af4c41b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_vaughan, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:16:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 93 B/s rd, 0 B/s wr, 0 op/s
Jan 26 13:16:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:34.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:34 np0005596060 compassionate_vaughan[269080]: {
Jan 26 13:16:34 np0005596060 compassionate_vaughan[269080]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:16:34 np0005596060 compassionate_vaughan[269080]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:16:34 np0005596060 compassionate_vaughan[269080]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:16:34 np0005596060 compassionate_vaughan[269080]:        "osd_id": 1,
Jan 26 13:16:34 np0005596060 compassionate_vaughan[269080]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:16:34 np0005596060 compassionate_vaughan[269080]:        "type": "bluestore"
Jan 26 13:16:34 np0005596060 compassionate_vaughan[269080]:    }
Jan 26 13:16:34 np0005596060 compassionate_vaughan[269080]: }
Jan 26 13:16:34 np0005596060 systemd[1]: libpod-2132f01f4834a8227330f8f94cdc833b519f6b2c1511f0089ed64de8af4c41b7.scope: Deactivated successfully.
Jan 26 13:16:34 np0005596060 podman[269064]: 2026-01-26 18:16:34.894509584 +0000 UTC m=+1.550599173 container died 2132f01f4834a8227330f8f94cdc833b519f6b2c1511f0089ed64de8af4c41b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_vaughan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:16:34 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7b99a41d6084cfb1d251a47514dfd3e78a631e376d299a946e365886c5c845eb-merged.mount: Deactivated successfully.
Jan 26 13:16:34 np0005596060 podman[269064]: 2026-01-26 18:16:34.986408443 +0000 UTC m=+1.642498032 container remove 2132f01f4834a8227330f8f94cdc833b519f6b2c1511f0089ed64de8af4c41b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_vaughan, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:16:34 np0005596060 systemd[1]: libpod-conmon-2132f01f4834a8227330f8f94cdc833b519f6b2c1511f0089ed64de8af4c41b7.scope: Deactivated successfully.
Jan 26 13:16:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:16:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:16:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:16:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:16:35 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 1fb2f01a-3b1d-4a51-92af-df371cd5ff5f does not exist
Jan 26 13:16:35 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ea112370-8cc3-4c64-b1cd-de96fca8d425 does not exist
Jan 26 13:16:35 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 28f52f8c-bfae-4cb6-8dec-e2fef266788a does not exist
Jan 26 13:16:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:16:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:35.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:16:35 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:16:35 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:16:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 26 13:16:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:36.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:36 np0005596060 nova_compute[247421]: 2026-01-26 18:16:36.249 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:37.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:37 np0005596060 nova_compute[247421]: 2026-01-26 18:16:37.487 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:16:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 26 13:16:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:38.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:39.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:16:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:40.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:16:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/682949984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:16:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:16:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/682949984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:16:41 np0005596060 nova_compute[247421]: 2026-01-26 18:16:41.250 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:41.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:16:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:42.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:42 np0005596060 nova_compute[247421]: 2026-01-26 18:16:42.490 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:16:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:43.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:16:44
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'vms']
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:16:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:44.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:16:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:16:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:45.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:16:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:46.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:46 np0005596060 nova_compute[247421]: 2026-01-26 18:16:46.253 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:47.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:47 np0005596060 nova_compute[247421]: 2026-01-26 18:16:47.493 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:16:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:16:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:48.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:48 np0005596060 podman[269173]: 2026-01-26 18:16:48.830919887 +0000 UTC m=+0.088490924 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 13:16:48 np0005596060 podman[269174]: 2026-01-26 18:16:48.840430886 +0000 UTC m=+0.098219939 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 13:16:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:49.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:16:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000051s ======
Jan 26 13:16:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:50.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 26 13:16:51 np0005596060 nova_compute[247421]: 2026-01-26 18:16:51.255 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:16:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:51.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:16:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:16:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:52.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:52 np0005596060 nova_compute[247421]: 2026-01-26 18:16:52.496 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:16:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:53.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:16:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:54.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:55.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:16:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:56.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:56 np0005596060 nova_compute[247421]: 2026-01-26 18:16:56.353 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:56 np0005596060 nova_compute[247421]: 2026-01-26 18:16:56.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:16:56 np0005596060 nova_compute[247421]: 2026-01-26 18:16:56.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:16:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:57.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:16:57 np0005596060 nova_compute[247421]: 2026-01-26 18:16:57.497 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:16:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:16:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:16:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:16:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:16:58.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:16:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:16:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:16:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:16:59.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:17:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:00.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:00 np0005596060 nova_compute[247421]: 2026-01-26 18:17:00.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:17:00 np0005596060 nova_compute[247421]: 2026-01-26 18:17:00.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:17:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:01.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:01 np0005596060 nova_compute[247421]: 2026-01-26 18:17:01.355 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:01 np0005596060 nova_compute[247421]: 2026-01-26 18:17:01.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:17:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:17:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:02.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:02 np0005596060 nova_compute[247421]: 2026-01-26 18:17:02.500 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:02 np0005596060 nova_compute[247421]: 2026-01-26 18:17:02.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:17:02 np0005596060 nova_compute[247421]: 2026-01-26 18:17:02.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:17:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:17:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:03.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:17:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:17:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:17:04.181 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:17:04 np0005596060 nova_compute[247421]: 2026-01-26 18:17:04.182 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:17:04.182 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:17:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:17:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:04.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:04 np0005596060 nova_compute[247421]: 2026-01-26 18:17:04.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:17:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:05.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:05 np0005596060 nova_compute[247421]: 2026-01-26 18:17:05.680 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:17:05 np0005596060 nova_compute[247421]: 2026-01-26 18:17:05.681 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:17:05 np0005596060 nova_compute[247421]: 2026-01-26 18:17:05.681 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:17:05 np0005596060 nova_compute[247421]: 2026-01-26 18:17:05.720 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:17:05 np0005596060 nova_compute[247421]: 2026-01-26 18:17:05.721 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:17:05 np0005596060 nova_compute[247421]: 2026-01-26 18:17:05.777 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:17:05 np0005596060 nova_compute[247421]: 2026-01-26 18:17:05.777 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:17:05 np0005596060 nova_compute[247421]: 2026-01-26 18:17:05.778 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:17:05 np0005596060 nova_compute[247421]: 2026-01-26 18:17:05.778 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:17:05 np0005596060 nova_compute[247421]: 2026-01-26 18:17:05.779 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:17:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:17:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:17:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024554579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.250 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:17:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:06.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.358 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.438 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.439 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4868MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.440 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.440 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.587 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.587 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.816 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing inventories for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.895 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating ProviderTree inventory for provider c679f5ea-e093-4909-bb04-0342c8551a8f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.896 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.929 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing aggregate associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.954 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing trait associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 26 13:17:06 np0005596060 nova_compute[247421]: 2026-01-26 18:17:06.998 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:17:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:07.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:17:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3328827057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:17:07 np0005596060 nova_compute[247421]: 2026-01-26 18:17:07.430 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:17:07 np0005596060 nova_compute[247421]: 2026-01-26 18:17:07.435 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:17:07 np0005596060 nova_compute[247421]: 2026-01-26 18:17:07.449 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:17:07 np0005596060 nova_compute[247421]: 2026-01-26 18:17:07.451 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:17:07 np0005596060 nova_compute[247421]: 2026-01-26 18:17:07.451 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.011s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:17:07 np0005596060 nova_compute[247421]: 2026-01-26 18:17:07.451 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:17:07 np0005596060 nova_compute[247421]: 2026-01-26 18:17:07.451 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 26 13:17:07 np0005596060 nova_compute[247421]: 2026-01-26 18:17:07.515 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:17:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:17:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:08.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 26 13:17:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 26 13:17:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 26 13:17:08 np0005596060 nova_compute[247421]: 2026-01-26 18:17:08.667 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:17:08 np0005596060 nova_compute[247421]: 2026-01-26 18:17:08.668 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:17:08 np0005596060 nova_compute[247421]: 2026-01-26 18:17:08.668 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 26 13:17:08 np0005596060 nova_compute[247421]: 2026-01-26 18:17:08.687 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 26 13:17:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:09.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 32 op/s
Jan 26 13:17:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:10.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:11.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:11 np0005596060 nova_compute[247421]: 2026-01-26 18:17:11.359 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 242 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 26 13:17:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:12.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:12 np0005596060 nova_compute[247421]: 2026-01-26 18:17:12.518 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:17:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:13.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:17:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:17:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:17:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:17:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:17:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:17:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:17:14.185 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:17:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 893 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 26 13:17:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:14.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:17:14.747 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:17:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:17:14.747 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:17:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:17:14.748 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:17:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:15.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 893 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 26 13:17:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:16.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:16 np0005596060 nova_compute[247421]: 2026-01-26 18:17:16.362 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:17.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:17 np0005596060 nova_compute[247421]: 2026-01-26 18:17:17.519 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:17:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 135 op/s
Jan 26 13:17:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:18.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:19.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:19 np0005596060 podman[269380]: 2026-01-26 18:17:19.793579469 +0000 UTC m=+0.059847865 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 13:17:19 np0005596060 podman[269381]: 2026-01-26 18:17:19.818094725 +0000 UTC m=+0.083840567 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:17:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 116 op/s
Jan 26 13:17:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:20.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:21.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:21 np0005596060 nova_compute[247421]: 2026-01-26 18:17:21.365 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 112 op/s
Jan 26 13:17:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:22.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:22 np0005596060 nova_compute[247421]: 2026-01-26 18:17:22.522 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:17:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:23.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 87 op/s
Jan 26 13:17:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:24.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:25.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 KiB/s wr, 65 op/s
Jan 26 13:17:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:26.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:26 np0005596060 nova_compute[247421]: 2026-01-26 18:17:26.367 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:27.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:27 np0005596060 nova_compute[247421]: 2026-01-26 18:17:27.525 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:17:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 KiB/s wr, 65 op/s
Jan 26 13:17:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:28.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:29.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:17:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:17:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:30.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:17:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:31.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:31 np0005596060 nova_compute[247421]: 2026-01-26 18:17:31.370 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:17:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 59 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.0 MiB/s wr, 27 op/s
Jan 26 13:17:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:32.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:32 np0005596060 nova_compute[247421]: 2026-01-26 18:17:32.528 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:17:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 26 13:17:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:33.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 66 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 28 op/s
Jan 26 13:17:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 26 13:17:34 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 26 13:17:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:34.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:35.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 66 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 34 op/s
Jan 26 13:17:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:36.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:36 np0005596060 nova_compute[247421]: 2026-01-26 18:17:36.372 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:17:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:17:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:17:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:17:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:17:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:17:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:37.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:37 np0005596060 nova_compute[247421]: 2026-01-26 18:17:37.534 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:17:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:17:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:17:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 26 13:17:38 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ecbae702-6232-4abd-8261-edac7d8a0efb does not exist
Jan 26 13:17:38 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ee300363-f751-4135-8b19-38ac0cc32064 does not exist
Jan 26 13:17:38 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7201a6f3-c3af-4e3e-b34e-9058985741e8 does not exist
Jan 26 13:17:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:38.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:17:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:17:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:17:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:17:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:17:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:17:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:17:39 np0005596060 podman[269757]: 2026-01-26 18:17:39.150826454 +0000 UTC m=+0.038148499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:17:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:39.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 26 13:17:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:40.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:40 np0005596060 podman[269757]: 2026-01-26 18:17:40.55029847 +0000 UTC m=+1.437620495 container create ce5a786960e392eb8395eac8580ff19b64ae56b3854afc79939fb144b2a44c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_clarke, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 13:17:40 np0005596060 systemd[1]: Started libpod-conmon-ce5a786960e392eb8395eac8580ff19b64ae56b3854afc79939fb144b2a44c24.scope.
Jan 26 13:17:40 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:17:40 np0005596060 podman[269757]: 2026-01-26 18:17:40.743031911 +0000 UTC m=+1.630353956 container init ce5a786960e392eb8395eac8580ff19b64ae56b3854afc79939fb144b2a44c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_clarke, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 26 13:17:40 np0005596060 podman[269757]: 2026-01-26 18:17:40.752658173 +0000 UTC m=+1.639980198 container start ce5a786960e392eb8395eac8580ff19b64ae56b3854afc79939fb144b2a44c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_clarke, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 13:17:40 np0005596060 podman[269757]: 2026-01-26 18:17:40.758010328 +0000 UTC m=+1.645332373 container attach ce5a786960e392eb8395eac8580ff19b64ae56b3854afc79939fb144b2a44c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 26 13:17:40 np0005596060 quirky_clarke[269774]: 167 167
Jan 26 13:17:40 np0005596060 systemd[1]: libpod-ce5a786960e392eb8395eac8580ff19b64ae56b3854afc79939fb144b2a44c24.scope: Deactivated successfully.
Jan 26 13:17:40 np0005596060 podman[269757]: 2026-01-26 18:17:40.762611553 +0000 UTC m=+1.649933578 container died ce5a786960e392eb8395eac8580ff19b64ae56b3854afc79939fb144b2a44c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_clarke, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 13:17:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:17:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:17:40 np0005596060 systemd[1]: var-lib-containers-storage-overlay-56c32372d824bfdbdea34fbe15d786e3e069748083859a8d3f32167a860cfe5f-merged.mount: Deactivated successfully.
Jan 26 13:17:40 np0005596060 podman[269757]: 2026-01-26 18:17:40.817461021 +0000 UTC m=+1.704783056 container remove ce5a786960e392eb8395eac8580ff19b64ae56b3854afc79939fb144b2a44c24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:17:40 np0005596060 systemd[1]: libpod-conmon-ce5a786960e392eb8395eac8580ff19b64ae56b3854afc79939fb144b2a44c24.scope: Deactivated successfully.
Jan 26 13:17:41 np0005596060 podman[269797]: 2026-01-26 18:17:41.027782644 +0000 UTC m=+0.075176439 container create 459fcf4f3881e453a9cab454dfee09fd1e32a40dda12b3906bf8403e6e3c9958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shtern, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Jan 26 13:17:41 np0005596060 podman[269797]: 2026-01-26 18:17:40.978487056 +0000 UTC m=+0.025880871 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:17:41 np0005596060 systemd[1]: Started libpod-conmon-459fcf4f3881e453a9cab454dfee09fd1e32a40dda12b3906bf8403e6e3c9958.scope.
Jan 26 13:17:41 np0005596060 nova_compute[247421]: 2026-01-26 18:17:41.374 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:17:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:41.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:41 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:17:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72ae73c30f38846621047b755a4f8603df02315724a77ce53abae897b715ce7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72ae73c30f38846621047b755a4f8603df02315724a77ce53abae897b715ce7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72ae73c30f38846621047b755a4f8603df02315724a77ce53abae897b715ce7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72ae73c30f38846621047b755a4f8603df02315724a77ce53abae897b715ce7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a72ae73c30f38846621047b755a4f8603df02315724a77ce53abae897b715ce7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:41 np0005596060 podman[269797]: 2026-01-26 18:17:41.536962276 +0000 UTC m=+0.584356091 container init 459fcf4f3881e453a9cab454dfee09fd1e32a40dda12b3906bf8403e6e3c9958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shtern, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:17:41 np0005596060 podman[269797]: 2026-01-26 18:17:41.544189637 +0000 UTC m=+0.591583432 container start 459fcf4f3881e453a9cab454dfee09fd1e32a40dda12b3906bf8403e6e3c9958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 13:17:41 np0005596060 podman[269797]: 2026-01-26 18:17:41.547527571 +0000 UTC m=+0.594921376 container attach 459fcf4f3881e453a9cab454dfee09fd1e32a40dda12b3906bf8403e6e3c9958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shtern, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:17:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 946 KiB/s wr, 40 op/s
Jan 26 13:17:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:42.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:42 np0005596060 priceless_shtern[269813]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:17:42 np0005596060 priceless_shtern[269813]: --> relative data size: 1.0
Jan 26 13:17:42 np0005596060 priceless_shtern[269813]: --> All data devices are unavailable
Jan 26 13:17:42 np0005596060 systemd[1]: libpod-459fcf4f3881e453a9cab454dfee09fd1e32a40dda12b3906bf8403e6e3c9958.scope: Deactivated successfully.
Jan 26 13:17:42 np0005596060 podman[269797]: 2026-01-26 18:17:42.358305598 +0000 UTC m=+1.405699393 container died 459fcf4f3881e453a9cab454dfee09fd1e32a40dda12b3906bf8403e6e3c9958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 13:17:42 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a72ae73c30f38846621047b755a4f8603df02315724a77ce53abae897b715ce7-merged.mount: Deactivated successfully.
Jan 26 13:17:42 np0005596060 podman[269797]: 2026-01-26 18:17:42.412580891 +0000 UTC m=+1.459974686 container remove 459fcf4f3881e453a9cab454dfee09fd1e32a40dda12b3906bf8403e6e3c9958 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shtern, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:17:42 np0005596060 systemd[1]: libpod-conmon-459fcf4f3881e453a9cab454dfee09fd1e32a40dda12b3906bf8403e6e3c9958.scope: Deactivated successfully.
Jan 26 13:17:42 np0005596060 nova_compute[247421]: 2026-01-26 18:17:42.536 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:17:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:17:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 26 13:17:43 np0005596060 podman[269979]: 2026-01-26 18:17:43.011437915 +0000 UTC m=+0.044115299 container create 7fd19cbb1ec6568c10c0bcc11e7c5c681089e12fc3e6d515faed508909f4c52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldstine, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:17:43 np0005596060 systemd[1]: Started libpod-conmon-7fd19cbb1ec6568c10c0bcc11e7c5c681089e12fc3e6d515faed508909f4c52e.scope.
Jan 26 13:17:43 np0005596060 podman[269979]: 2026-01-26 18:17:42.992066779 +0000 UTC m=+0.024744183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:17:43 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:17:43 np0005596060 podman[269979]: 2026-01-26 18:17:43.108409461 +0000 UTC m=+0.141086855 container init 7fd19cbb1ec6568c10c0bcc11e7c5c681089e12fc3e6d515faed508909f4c52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:17:43 np0005596060 podman[269979]: 2026-01-26 18:17:43.113562531 +0000 UTC m=+0.146239915 container start 7fd19cbb1ec6568c10c0bcc11e7c5c681089e12fc3e6d515faed508909f4c52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:17:43 np0005596060 podman[269979]: 2026-01-26 18:17:43.11671006 +0000 UTC m=+0.149387444 container attach 7fd19cbb1ec6568c10c0bcc11e7c5c681089e12fc3e6d515faed508909f4c52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldstine, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:17:43 np0005596060 competent_goldstine[269995]: 167 167
Jan 26 13:17:43 np0005596060 systemd[1]: libpod-7fd19cbb1ec6568c10c0bcc11e7c5c681089e12fc3e6d515faed508909f4c52e.scope: Deactivated successfully.
Jan 26 13:17:43 np0005596060 podman[269979]: 2026-01-26 18:17:43.119245893 +0000 UTC m=+0.151923297 container died 7fd19cbb1ec6568c10c0bcc11e7c5c681089e12fc3e6d515faed508909f4c52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:17:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 26 13:17:43 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c56f477ec031fdcf7d5c5d2b31cc7e0433c1b9b2b189d2d6678598a56b6f4cad-merged.mount: Deactivated successfully.
Jan 26 13:17:43 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 26 13:17:43 np0005596060 podman[269979]: 2026-01-26 18:17:43.174714647 +0000 UTC m=+0.207392031 container remove 7fd19cbb1ec6568c10c0bcc11e7c5c681089e12fc3e6d515faed508909f4c52e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 13:17:43 np0005596060 systemd[1]: libpod-conmon-7fd19cbb1ec6568c10c0bcc11e7c5c681089e12fc3e6d515faed508909f4c52e.scope: Deactivated successfully.
Jan 26 13:17:43 np0005596060 podman[270019]: 2026-01-26 18:17:43.351527379 +0000 UTC m=+0.045957026 container create c025e757b2e8acb421b2d2881d5c4dc7ec6ee6294c26ff52ca7eb3de5874f1c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:17:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:43.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:43 np0005596060 systemd[1]: Started libpod-conmon-c025e757b2e8acb421b2d2881d5c4dc7ec6ee6294c26ff52ca7eb3de5874f1c1.scope.
Jan 26 13:17:43 np0005596060 podman[270019]: 2026-01-26 18:17:43.331658569 +0000 UTC m=+0.026088236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:17:43 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:17:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc4eec74ab62f7b8e17f801ec738a2bda7231268724a9350e9768ad8a20eafe7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc4eec74ab62f7b8e17f801ec738a2bda7231268724a9350e9768ad8a20eafe7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc4eec74ab62f7b8e17f801ec738a2bda7231268724a9350e9768ad8a20eafe7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc4eec74ab62f7b8e17f801ec738a2bda7231268724a9350e9768ad8a20eafe7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:43 np0005596060 podman[270019]: 2026-01-26 18:17:43.460285321 +0000 UTC m=+0.154714988 container init c025e757b2e8acb421b2d2881d5c4dc7ec6ee6294c26ff52ca7eb3de5874f1c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_greider, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:17:43 np0005596060 podman[270019]: 2026-01-26 18:17:43.470515428 +0000 UTC m=+0.164945075 container start c025e757b2e8acb421b2d2881d5c4dc7ec6ee6294c26ff52ca7eb3de5874f1c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 13:17:43 np0005596060 podman[270019]: 2026-01-26 18:17:43.477604136 +0000 UTC m=+0.172033803 container attach c025e757b2e8acb421b2d2881d5c4dc7ec6ee6294c26ff52ca7eb3de5874f1c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_greider, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:17:44
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'backups', 'vms', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'images', 'cephfs.cephfs.data', '.rgw.root']
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 405 KiB/s rd, 688 KiB/s wr, 56 op/s
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]: {
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:    "1": [
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:        {
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            "devices": [
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "/dev/loop3"
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            ],
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            "lv_name": "ceph_lv0",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            "lv_size": "7511998464",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            "name": "ceph_lv0",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            "tags": {
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "ceph.cluster_name": "ceph",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "ceph.crush_device_class": "",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "ceph.encrypted": "0",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "ceph.osd_id": "1",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "ceph.type": "block",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:                "ceph.vdo": "0"
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            },
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            "type": "block",
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:            "vg_name": "ceph_vg0"
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:        }
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]:    ]
Jan 26 13:17:44 np0005596060 vigilant_greider[270036]: }
Jan 26 13:17:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:44.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:44 np0005596060 systemd[1]: libpod-c025e757b2e8acb421b2d2881d5c4dc7ec6ee6294c26ff52ca7eb3de5874f1c1.scope: Deactivated successfully.
Jan 26 13:17:44 np0005596060 podman[270019]: 2026-01-26 18:17:44.312797717 +0000 UTC m=+1.007227364 container died c025e757b2e8acb421b2d2881d5c4dc7ec6ee6294c26ff52ca7eb3de5874f1c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_greider, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:17:44 np0005596060 systemd[1]: var-lib-containers-storage-overlay-cc4eec74ab62f7b8e17f801ec738a2bda7231268724a9350e9768ad8a20eafe7-merged.mount: Deactivated successfully.
Jan 26 13:17:44 np0005596060 podman[270019]: 2026-01-26 18:17:44.375390189 +0000 UTC m=+1.069819836 container remove c025e757b2e8acb421b2d2881d5c4dc7ec6ee6294c26ff52ca7eb3de5874f1c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 13:17:44 np0005596060 systemd[1]: libpod-conmon-c025e757b2e8acb421b2d2881d5c4dc7ec6ee6294c26ff52ca7eb3de5874f1c1.scope: Deactivated successfully.
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:17:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:17:44 np0005596060 podman[270196]: 2026-01-26 18:17:44.943324356 +0000 UTC m=+0.039935804 container create 457f438144d8dc8cf99e62b62e0e126af0e52e8447a57d81a8b4080baa64e259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:17:44 np0005596060 systemd[1]: Started libpod-conmon-457f438144d8dc8cf99e62b62e0e126af0e52e8447a57d81a8b4080baa64e259.scope.
Jan 26 13:17:45 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:17:45 np0005596060 podman[270196]: 2026-01-26 18:17:44.924736489 +0000 UTC m=+0.021347937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:17:45 np0005596060 podman[270196]: 2026-01-26 18:17:45.029951412 +0000 UTC m=+0.126562870 container init 457f438144d8dc8cf99e62b62e0e126af0e52e8447a57d81a8b4080baa64e259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 26 13:17:45 np0005596060 podman[270196]: 2026-01-26 18:17:45.036025655 +0000 UTC m=+0.132637103 container start 457f438144d8dc8cf99e62b62e0e126af0e52e8447a57d81a8b4080baa64e259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 13:17:45 np0005596060 podman[270196]: 2026-01-26 18:17:45.041334998 +0000 UTC m=+0.137946446 container attach 457f438144d8dc8cf99e62b62e0e126af0e52e8447a57d81a8b4080baa64e259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:17:45 np0005596060 strange_galileo[270212]: 167 167
Jan 26 13:17:45 np0005596060 systemd[1]: libpod-457f438144d8dc8cf99e62b62e0e126af0e52e8447a57d81a8b4080baa64e259.scope: Deactivated successfully.
Jan 26 13:17:45 np0005596060 podman[270196]: 2026-01-26 18:17:45.043265857 +0000 UTC m=+0.139877315 container died 457f438144d8dc8cf99e62b62e0e126af0e52e8447a57d81a8b4080baa64e259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:17:45 np0005596060 systemd[1]: var-lib-containers-storage-overlay-db142027aef722d5ae49fb3f15bded67933e2c8809a0d137f1fadc872af490dc-merged.mount: Deactivated successfully.
Jan 26 13:17:45 np0005596060 podman[270196]: 2026-01-26 18:17:45.081851796 +0000 UTC m=+0.178463254 container remove 457f438144d8dc8cf99e62b62e0e126af0e52e8447a57d81a8b4080baa64e259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 13:17:45 np0005596060 systemd[1]: libpod-conmon-457f438144d8dc8cf99e62b62e0e126af0e52e8447a57d81a8b4080baa64e259.scope: Deactivated successfully.
Jan 26 13:17:45 np0005596060 podman[270236]: 2026-01-26 18:17:45.241733612 +0000 UTC m=+0.040919529 container create 3a0b4bd0448bc5eae77a0226f1eb9e183899aa3c57455be0db09c5097ebc6c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 13:17:45 np0005596060 systemd[1]: Started libpod-conmon-3a0b4bd0448bc5eae77a0226f1eb9e183899aa3c57455be0db09c5097ebc6c17.scope.
Jan 26 13:17:45 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:17:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950085626cf7b3e8a8c497559b4b8140e60175bd0053df0fbf721effaccdb098/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950085626cf7b3e8a8c497559b4b8140e60175bd0053df0fbf721effaccdb098/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950085626cf7b3e8a8c497559b4b8140e60175bd0053df0fbf721effaccdb098/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/950085626cf7b3e8a8c497559b4b8140e60175bd0053df0fbf721effaccdb098/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:17:45 np0005596060 podman[270236]: 2026-01-26 18:17:45.225262059 +0000 UTC m=+0.024448006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:17:45 np0005596060 podman[270236]: 2026-01-26 18:17:45.333678821 +0000 UTC m=+0.132864748 container init 3a0b4bd0448bc5eae77a0226f1eb9e183899aa3c57455be0db09c5097ebc6c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_williams, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:17:45 np0005596060 podman[270236]: 2026-01-26 18:17:45.343064527 +0000 UTC m=+0.142250434 container start 3a0b4bd0448bc5eae77a0226f1eb9e183899aa3c57455be0db09c5097ebc6c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_williams, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:17:45 np0005596060 podman[270236]: 2026-01-26 18:17:45.34676585 +0000 UTC m=+0.145951827 container attach 3a0b4bd0448bc5eae77a0226f1eb9e183899aa3c57455be0db09c5097ebc6c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:17:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:45.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:46 np0005596060 naughty_williams[270252]: {
Jan 26 13:17:46 np0005596060 naughty_williams[270252]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:17:46 np0005596060 naughty_williams[270252]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:17:46 np0005596060 naughty_williams[270252]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:17:46 np0005596060 naughty_williams[270252]:        "osd_id": 1,
Jan 26 13:17:46 np0005596060 naughty_williams[270252]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:17:46 np0005596060 naughty_williams[270252]:        "type": "bluestore"
Jan 26 13:17:46 np0005596060 naughty_williams[270252]:    }
Jan 26 13:17:46 np0005596060 naughty_williams[270252]: }
Jan 26 13:17:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 403 KiB/s rd, 685 KiB/s wr, 55 op/s
Jan 26 13:17:46 np0005596060 systemd[1]: libpod-3a0b4bd0448bc5eae77a0226f1eb9e183899aa3c57455be0db09c5097ebc6c17.scope: Deactivated successfully.
Jan 26 13:17:46 np0005596060 podman[270236]: 2026-01-26 18:17:46.241513107 +0000 UTC m=+1.040699024 container died 3a0b4bd0448bc5eae77a0226f1eb9e183899aa3c57455be0db09c5097ebc6c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 13:17:46 np0005596060 systemd[1]: var-lib-containers-storage-overlay-950085626cf7b3e8a8c497559b4b8140e60175bd0053df0fbf721effaccdb098-merged.mount: Deactivated successfully.
Jan 26 13:17:46 np0005596060 podman[270236]: 2026-01-26 18:17:46.300213261 +0000 UTC m=+1.099399178 container remove 3a0b4bd0448bc5eae77a0226f1eb9e183899aa3c57455be0db09c5097ebc6c17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_williams, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 26 13:17:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:46 np0005596060 systemd[1]: libpod-conmon-3a0b4bd0448bc5eae77a0226f1eb9e183899aa3c57455be0db09c5097ebc6c17.scope: Deactivated successfully.
Jan 26 13:17:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:46.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:17:46 np0005596060 nova_compute[247421]: 2026-01-26 18:17:46.376 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:17:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:17:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:17:46 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7a7b4cde-5bbb-495e-8735-2bad8ae709b6 does not exist
Jan 26 13:17:46 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 340f9f5c-1fd2-4fc5-b04b-e5af1a74af6b does not exist
Jan 26 13:17:46 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 4e6afe64-621b-4af0-93cb-895c31911e60 does not exist
Jan 26 13:17:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:17:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:17:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:17:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:47.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:17:47 np0005596060 nova_compute[247421]: 2026-01-26 18:17:47.539 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:17:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 126 op/s
Jan 26 13:17:48 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:17:48.272 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:17:48 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:17:48.273 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:17:48 np0005596060 nova_compute[247421]: 2026-01-26 18:17:48.273 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:48.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:49.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 126 op/s
Jan 26 13:17:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:50.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:50 np0005596060 podman[270337]: 2026-01-26 18:17:50.811872627 +0000 UTC m=+0.068665046 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 26 13:17:50 np0005596060 podman[270338]: 2026-01-26 18:17:50.882126612 +0000 UTC m=+0.131789092 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 13:17:51 np0005596060 nova_compute[247421]: 2026-01-26 18:17:51.377 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:51.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 KiB/s wr, 112 op/s
Jan 26 13:17:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:17:52.275 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:17:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:52.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:52 np0005596060 nova_compute[247421]: 2026-01-26 18:17:52.541 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:17:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:53.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.3 KiB/s wr, 86 op/s
Jan 26 13:17:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:54.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:55.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.2 KiB/s wr, 79 op/s
Jan 26 13:17:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:56.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:56 np0005596060 nova_compute[247421]: 2026-01-26 18:17:56.379 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:57.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:17:57 np0005596060 nova_compute[247421]: 2026-01-26 18:17:57.543 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:17:57 np0005596060 nova_compute[247421]: 2026-01-26 18:17:57.671 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:17:57 np0005596060 nova_compute[247421]: 2026-01-26 18:17:57.671 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:17:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:17:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.2 KiB/s wr, 79 op/s
Jan 26 13:17:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:17:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:17:58.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:17:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:17:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:17:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:17:59.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:00.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:01 np0005596060 nova_compute[247421]: 2026-01-26 18:18:01.381 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:01.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:01 np0005596060 nova_compute[247421]: 2026-01-26 18:18:01.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:18:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:02.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:02 np0005596060 nova_compute[247421]: 2026-01-26 18:18:02.606 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:02 np0005596060 nova_compute[247421]: 2026-01-26 18:18:02.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:18:02 np0005596060 nova_compute[247421]: 2026-01-26 18:18:02.649 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:18:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:18:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:03.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:03 np0005596060 nova_compute[247421]: 2026-01-26 18:18:03.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:18:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:18:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:04.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:04 np0005596060 nova_compute[247421]: 2026-01-26 18:18:04.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:18:04 np0005596060 nova_compute[247421]: 2026-01-26 18:18:04.758 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:18:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:05.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:05 np0005596060 nova_compute[247421]: 2026-01-26 18:18:05.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:18:05 np0005596060 nova_compute[247421]: 2026-01-26 18:18:05.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:18:05 np0005596060 nova_compute[247421]: 2026-01-26 18:18:05.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:18:05 np0005596060 nova_compute[247421]: 2026-01-26 18:18:05.669 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:18:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:06.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:06 np0005596060 nova_compute[247421]: 2026-01-26 18:18:06.383 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:07.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:07 np0005596060 nova_compute[247421]: 2026-01-26 18:18:07.608 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:07 np0005596060 nova_compute[247421]: 2026-01-26 18:18:07.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:18:07 np0005596060 nova_compute[247421]: 2026-01-26 18:18:07.701 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:18:07 np0005596060 nova_compute[247421]: 2026-01-26 18:18:07.701 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:18:07 np0005596060 nova_compute[247421]: 2026-01-26 18:18:07.702 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:18:07 np0005596060 nova_compute[247421]: 2026-01-26 18:18:07.702 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:18:07 np0005596060 nova_compute[247421]: 2026-01-26 18:18:07.702 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:18:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:18:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:18:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4173420574' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:18:08 np0005596060 nova_compute[247421]: 2026-01-26 18:18:08.181 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:18:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:08 np0005596060 nova_compute[247421]: 2026-01-26 18:18:08.321 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 13:18:08 np0005596060 nova_compute[247421]: 2026-01-26 18:18:08.322 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4846MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 13:18:08 np0005596060 nova_compute[247421]: 2026-01-26 18:18:08.322 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:18:08 np0005596060 nova_compute[247421]: 2026-01-26 18:18:08.322 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:18:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:08.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:08 np0005596060 nova_compute[247421]: 2026-01-26 18:18:08.379 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 13:18:08 np0005596060 nova_compute[247421]: 2026-01-26 18:18:08.379 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 13:18:08 np0005596060 nova_compute[247421]: 2026-01-26 18:18:08.598 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:18:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:18:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/800257587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:18:09 np0005596060 nova_compute[247421]: 2026-01-26 18:18:09.054 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:18:09 np0005596060 nova_compute[247421]: 2026-01-26 18:18:09.060 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:18:09 np0005596060 nova_compute[247421]: 2026-01-26 18:18:09.075 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:18:09 np0005596060 nova_compute[247421]: 2026-01-26 18:18:09.077 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 13:18:09 np0005596060 nova_compute[247421]: 2026-01-26 18:18:09.078 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:18:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:09.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:10.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:11 np0005596060 nova_compute[247421]: 2026-01-26 18:18:11.384 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:18:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:11.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:12 np0005596060 nova_compute[247421]: 2026-01-26 18:18:12.078 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:18:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:12.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:12 np0005596060 nova_compute[247421]: 2026-01-26 18:18:12.610 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:18:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:18:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:13.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:18:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:18:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:18:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:18:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:18:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:18:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:14.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:18:14.748 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:18:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:18:14.750 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:18:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:18:14.751 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:18:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:15.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:16.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:16 np0005596060 nova_compute[247421]: 2026-01-26 18:18:16.388 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:18:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:17.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:17 np0005596060 nova_compute[247421]: 2026-01-26 18:18:17.613 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:18:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:18.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:18:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:19.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:20.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:21 np0005596060 nova_compute[247421]: 2026-01-26 18:18:21.390 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:18:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:21.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:21 np0005596060 podman[270541]: 2026-01-26 18:18:21.81363173 +0000 UTC m=+0.069846335 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:18:21 np0005596060 podman[270542]: 2026-01-26 18:18:21.886003818 +0000 UTC m=+0.132282304 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 26 13:18:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:22.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:22 np0005596060 nova_compute[247421]: 2026-01-26 18:18:22.615 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:18:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:18:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:23.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:24.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:25.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:26.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:26 np0005596060 nova_compute[247421]: 2026-01-26 18:18:26.394 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:18:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:27 np0005596060 nova_compute[247421]: 2026-01-26 18:18:27.618 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:18:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:28.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:18:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:29.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:30.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:31 np0005596060 nova_compute[247421]: 2026-01-26 18:18:31.396 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:18:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:31.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.487379) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451511487449, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1985, "num_deletes": 255, "total_data_size": 3483968, "memory_usage": 3540512, "flush_reason": "Manual Compaction"}
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451511513474, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3419225, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27261, "largest_seqno": 29245, "table_properties": {"data_size": 3410174, "index_size": 5673, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18812, "raw_average_key_size": 20, "raw_value_size": 3392033, "raw_average_value_size": 3707, "num_data_blocks": 248, "num_entries": 915, "num_filter_entries": 915, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769451318, "oldest_key_time": 1769451318, "file_creation_time": 1769451511, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 26167 microseconds, and 15086 cpu microseconds.
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.513541) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3419225 bytes OK
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.513571) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.516370) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.516388) EVENT_LOG_v1 {"time_micros": 1769451511516382, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.516407) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3475770, prev total WAL file size 3475770, number of live WAL files 2.
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.517560) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3339KB)], [62(7853KB)]
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451511517594, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 11460850, "oldest_snapshot_seqno": -1}
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5412 keys, 9458408 bytes, temperature: kUnknown
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451511597518, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 9458408, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9421395, "index_size": 22375, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 138195, "raw_average_key_size": 25, "raw_value_size": 9322878, "raw_average_value_size": 1722, "num_data_blocks": 905, "num_entries": 5412, "num_filter_entries": 5412, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769451511, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.597771) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 9458408 bytes
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.599194) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.2 rd, 118.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.7 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5937, records dropped: 525 output_compression: NoCompression
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.599215) EVENT_LOG_v1 {"time_micros": 1769451511599206, "job": 34, "event": "compaction_finished", "compaction_time_micros": 80011, "compaction_time_cpu_micros": 25061, "output_level": 6, "num_output_files": 1, "total_output_size": 9458408, "num_input_records": 5937, "num_output_records": 5412, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451511599962, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451511601828, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.517458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.601906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.601910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.601912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.601914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:18:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:18:31.601916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:18:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:32 np0005596060 nova_compute[247421]: 2026-01-26 18:18:32.621 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:18:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:18:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:33.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:18:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:34.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:35.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:36.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:36 np0005596060 nova_compute[247421]: 2026-01-26 18:18:36.399 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:37.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:37 np0005596060 nova_compute[247421]: 2026-01-26 18:18:37.624 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:38.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:18:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:39.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:18:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1896265319' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:18:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:18:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1896265319' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:18:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:40.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:41 np0005596060 nova_compute[247421]: 2026-01-26 18:18:41.401 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:41.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:42.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:42 np0005596060 nova_compute[247421]: 2026-01-26 18:18:42.627 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:18:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:43.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:43 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:18:43.595 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:18:43 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:18:43.597 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:18:43 np0005596060 nova_compute[247421]: 2026-01-26 18:18:43.596 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:18:44
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'backups', '.mgr', 'volumes', 'default.rgw.log', '.rgw.root', 'vms', 'images', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:44.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:18:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:18:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:45.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:46.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:46 np0005596060 nova_compute[247421]: 2026-01-26 18:18:46.403 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:47 np0005596060 nova_compute[247421]: 2026-01-26 18:18:47.629 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:48.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:48 np0005596060 podman[270822]: 2026-01-26 18:18:48.22404652 +0000 UTC m=+0.059894286 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 13:18:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:18:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:18:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:18:48 np0005596060 podman[270822]: 2026-01-26 18:18:48.320580495 +0000 UTC m=+0.156428261 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:18:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:18:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:48.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:18:48 np0005596060 podman[270978]: 2026-01-26 18:18:48.964498861 +0000 UTC m=+0.053978457 container exec e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 13:18:48 np0005596060 podman[270978]: 2026-01-26 18:18:48.975485037 +0000 UTC m=+0.064964613 container exec_died e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 13:18:49 np0005596060 podman[271044]: 2026-01-26 18:18:49.183107672 +0000 UTC m=+0.064049930 container exec 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, io.buildah.version=1.28.2, name=keepalived, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., description=keepalived for Ceph, version=2.2.4, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9)
Jan 26 13:18:49 np0005596060 podman[271044]: 2026-01-26 18:18:49.193929874 +0000 UTC m=+0.074872112 container exec_died 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vendor=Red Hat, Inc., description=keepalived for Ceph, name=keepalived, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, distribution-scope=public)
Jan 26 13:18:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:18:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:18:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:18:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:18:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:18:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:18:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:18:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:18:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:50.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:18:50 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 327d18dc-294a-4242-9eb7-c172b0dd1660 does not exist
Jan 26 13:18:50 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev c0c487de-d0ac-42b2-8fcc-1880a9df2c36 does not exist
Jan 26 13:18:50 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d655383c-f19d-41e5-9a63-38fd7022c22f does not exist
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:18:50 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:18:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:50.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:18:50.598 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:18:50 np0005596060 podman[271348]: 2026-01-26 18:18:50.942631523 +0000 UTC m=+0.040874828 container create 1202f18099cf9916ca2f2bacadf61a4b866ab2efeed1173fc127ade9dd361269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:18:50 np0005596060 systemd[1]: Started libpod-conmon-1202f18099cf9916ca2f2bacadf61a4b866ab2efeed1173fc127ade9dd361269.scope.
Jan 26 13:18:51 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:18:51 np0005596060 podman[271348]: 2026-01-26 18:18:50.923742808 +0000 UTC m=+0.021986103 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:18:51 np0005596060 podman[271348]: 2026-01-26 18:18:51.027104305 +0000 UTC m=+0.125347660 container init 1202f18099cf9916ca2f2bacadf61a4b866ab2efeed1173fc127ade9dd361269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:18:51 np0005596060 podman[271348]: 2026-01-26 18:18:51.037984518 +0000 UTC m=+0.136227783 container start 1202f18099cf9916ca2f2bacadf61a4b866ab2efeed1173fc127ade9dd361269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 13:18:51 np0005596060 sleepy_clarke[271364]: 167 167
Jan 26 13:18:51 np0005596060 systemd[1]: libpod-1202f18099cf9916ca2f2bacadf61a4b866ab2efeed1173fc127ade9dd361269.scope: Deactivated successfully.
Jan 26 13:18:51 np0005596060 conmon[271364]: conmon 1202f18099cf9916ca2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1202f18099cf9916ca2f2bacadf61a4b866ab2efeed1173fc127ade9dd361269.scope/container/memory.events
Jan 26 13:18:51 np0005596060 podman[271348]: 2026-01-26 18:18:51.045137118 +0000 UTC m=+0.143380383 container attach 1202f18099cf9916ca2f2bacadf61a4b866ab2efeed1173fc127ade9dd361269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 13:18:51 np0005596060 podman[271348]: 2026-01-26 18:18:51.045689762 +0000 UTC m=+0.143933027 container died 1202f18099cf9916ca2f2bacadf61a4b866ab2efeed1173fc127ade9dd361269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:18:51 np0005596060 systemd[1]: var-lib-containers-storage-overlay-57888e756e15c3d104ad5d185e0a07104acfd9d6fa79a64d487b8ebb749a8097-merged.mount: Deactivated successfully.
Jan 26 13:18:51 np0005596060 podman[271348]: 2026-01-26 18:18:51.091323768 +0000 UTC m=+0.189567023 container remove 1202f18099cf9916ca2f2bacadf61a4b866ab2efeed1173fc127ade9dd361269 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:18:51 np0005596060 systemd[1]: libpod-conmon-1202f18099cf9916ca2f2bacadf61a4b866ab2efeed1173fc127ade9dd361269.scope: Deactivated successfully.
Jan 26 13:18:51 np0005596060 podman[271387]: 2026-01-26 18:18:51.246521287 +0000 UTC m=+0.038825367 container create 90ec4a63af555813fd818ad9cfd9b7a069ce1ad8350550dec26a0a0a4443f7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:18:51 np0005596060 systemd[1]: Started libpod-conmon-90ec4a63af555813fd818ad9cfd9b7a069ce1ad8350550dec26a0a0a4443f7a4.scope.
Jan 26 13:18:51 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:18:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcf1ff3dc07a510abd7630c298548f41382964a71f2d6d012e6d52d62533a219/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcf1ff3dc07a510abd7630c298548f41382964a71f2d6d012e6d52d62533a219/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcf1ff3dc07a510abd7630c298548f41382964a71f2d6d012e6d52d62533a219/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcf1ff3dc07a510abd7630c298548f41382964a71f2d6d012e6d52d62533a219/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcf1ff3dc07a510abd7630c298548f41382964a71f2d6d012e6d52d62533a219/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:51 np0005596060 podman[271387]: 2026-01-26 18:18:51.313557501 +0000 UTC m=+0.105861581 container init 90ec4a63af555813fd818ad9cfd9b7a069ce1ad8350550dec26a0a0a4443f7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 13:18:51 np0005596060 podman[271387]: 2026-01-26 18:18:51.229046548 +0000 UTC m=+0.021350648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:18:51 np0005596060 podman[271387]: 2026-01-26 18:18:51.325511021 +0000 UTC m=+0.117815101 container start 90ec4a63af555813fd818ad9cfd9b7a069ce1ad8350550dec26a0a0a4443f7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:18:51 np0005596060 podman[271387]: 2026-01-26 18:18:51.329413529 +0000 UTC m=+0.121717609 container attach 90ec4a63af555813fd818ad9cfd9b7a069ce1ad8350550dec26a0a0a4443f7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_johnson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:18:51 np0005596060 nova_compute[247421]: 2026-01-26 18:18:51.407 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:51 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:18:51 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:18:52 np0005596060 bold_johnson[271404]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:18:52 np0005596060 bold_johnson[271404]: --> relative data size: 1.0
Jan 26 13:18:52 np0005596060 bold_johnson[271404]: --> All data devices are unavailable
Jan 26 13:18:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:18:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:52.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:18:52 np0005596060 systemd[1]: libpod-90ec4a63af555813fd818ad9cfd9b7a069ce1ad8350550dec26a0a0a4443f7a4.scope: Deactivated successfully.
Jan 26 13:18:52 np0005596060 podman[271387]: 2026-01-26 18:18:52.128895603 +0000 UTC m=+0.921199703 container died 90ec4a63af555813fd818ad9cfd9b7a069ce1ad8350550dec26a0a0a4443f7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_johnson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:18:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:52.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:52 np0005596060 nova_compute[247421]: 2026-01-26 18:18:52.633 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:18:53 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fcf1ff3dc07a510abd7630c298548f41382964a71f2d6d012e6d52d62533a219-merged.mount: Deactivated successfully.
Jan 26 13:18:53 np0005596060 podman[271387]: 2026-01-26 18:18:53.083780791 +0000 UTC m=+1.876084921 container remove 90ec4a63af555813fd818ad9cfd9b7a069ce1ad8350550dec26a0a0a4443f7a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_johnson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:18:53 np0005596060 systemd[1]: libpod-conmon-90ec4a63af555813fd818ad9cfd9b7a069ce1ad8350550dec26a0a0a4443f7a4.scope: Deactivated successfully.
Jan 26 13:18:53 np0005596060 podman[271421]: 2026-01-26 18:18:53.155077652 +0000 UTC m=+0.993337535 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 13:18:53 np0005596060 podman[271428]: 2026-01-26 18:18:53.236955078 +0000 UTC m=+1.074495943 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 13:18:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:18:53 np0005596060 podman[271669]: 2026-01-26 18:18:53.729745727 +0000 UTC m=+0.039480303 container create 612d7662de01d1bf0eaefd9eecf95ba4925387738adcf0dcc2fcd43ff6537bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:18:53 np0005596060 systemd[1]: Started libpod-conmon-612d7662de01d1bf0eaefd9eecf95ba4925387738adcf0dcc2fcd43ff6537bf0.scope.
Jan 26 13:18:53 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:18:53 np0005596060 podman[271669]: 2026-01-26 18:18:53.712473893 +0000 UTC m=+0.022208489 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:18:53 np0005596060 podman[271669]: 2026-01-26 18:18:53.826285382 +0000 UTC m=+0.136019978 container init 612d7662de01d1bf0eaefd9eecf95ba4925387738adcf0dcc2fcd43ff6537bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 13:18:53 np0005596060 podman[271669]: 2026-01-26 18:18:53.835592546 +0000 UTC m=+0.145327142 container start 612d7662de01d1bf0eaefd9eecf95ba4925387738adcf0dcc2fcd43ff6537bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 13:18:53 np0005596060 podman[271669]: 2026-01-26 18:18:53.839423562 +0000 UTC m=+0.149158138 container attach 612d7662de01d1bf0eaefd9eecf95ba4925387738adcf0dcc2fcd43ff6537bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 13:18:53 np0005596060 sleepy_dewdney[271685]: 167 167
Jan 26 13:18:53 np0005596060 systemd[1]: libpod-612d7662de01d1bf0eaefd9eecf95ba4925387738adcf0dcc2fcd43ff6537bf0.scope: Deactivated successfully.
Jan 26 13:18:53 np0005596060 podman[271669]: 2026-01-26 18:18:53.843671309 +0000 UTC m=+0.153405885 container died 612d7662de01d1bf0eaefd9eecf95ba4925387738adcf0dcc2fcd43ff6537bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:18:53 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d191d746401ae7ff999cb9c65ad453c766b2394ea728653921271c08571c7c60-merged.mount: Deactivated successfully.
Jan 26 13:18:53 np0005596060 podman[271669]: 2026-01-26 18:18:53.891025288 +0000 UTC m=+0.200759854 container remove 612d7662de01d1bf0eaefd9eecf95ba4925387738adcf0dcc2fcd43ff6537bf0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_dewdney, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:18:53 np0005596060 systemd[1]: libpod-conmon-612d7662de01d1bf0eaefd9eecf95ba4925387738adcf0dcc2fcd43ff6537bf0.scope: Deactivated successfully.
Jan 26 13:18:54 np0005596060 podman[271709]: 2026-01-26 18:18:54.048507794 +0000 UTC m=+0.043148594 container create b4395c72b6694b1d3af1916e4e38f49e7deda2347024e8d609411669e521a818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_benz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:18:54 np0005596060 systemd[1]: Started libpod-conmon-b4395c72b6694b1d3af1916e4e38f49e7deda2347024e8d609411669e521a818.scope.
Jan 26 13:18:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:54.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:54 np0005596060 podman[271709]: 2026-01-26 18:18:54.029583009 +0000 UTC m=+0.024223839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:18:54 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:18:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d687cf877d1e6cda68b3d347e3c9e635f6cac3ceb10d98597500877dec60587/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d687cf877d1e6cda68b3d347e3c9e635f6cac3ceb10d98597500877dec60587/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d687cf877d1e6cda68b3d347e3c9e635f6cac3ceb10d98597500877dec60587/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d687cf877d1e6cda68b3d347e3c9e635f6cac3ceb10d98597500877dec60587/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:54 np0005596060 podman[271709]: 2026-01-26 18:18:54.148535447 +0000 UTC m=+0.143176247 container init b4395c72b6694b1d3af1916e4e38f49e7deda2347024e8d609411669e521a818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_benz, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:18:54 np0005596060 podman[271709]: 2026-01-26 18:18:54.162401136 +0000 UTC m=+0.157041936 container start b4395c72b6694b1d3af1916e4e38f49e7deda2347024e8d609411669e521a818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_benz, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 13:18:54 np0005596060 podman[271709]: 2026-01-26 18:18:54.166625292 +0000 UTC m=+0.161266102 container attach b4395c72b6694b1d3af1916e4e38f49e7deda2347024e8d609411669e521a818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:18:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:54.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:18:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.0 total, 600.0 interval
Cumulative writes: 6693 writes, 29K keys, 6689 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
Cumulative WAL: 6693 writes, 6689 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1571 writes, 6770 keys, 1567 commit groups, 1.0 writes per commit group, ingest: 10.80 MB, 0.02 MB/s
Interval WAL: 1571 writes, 1567 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.8      3.03              0.15        17    0.178       0      0       0.0       0.0
  L6      1/0    9.02 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.4     49.8     40.7      3.26              0.45        16    0.204     79K   8944       0.0       0.0
 Sum      1/0    9.02 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.4     25.8     27.2      6.28              0.61        33    0.190     79K   8944       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.5     31.8     32.9      1.30              0.16         8    0.162     23K   2510       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     49.8     40.7      3.26              0.45        16    0.204     79K   8944       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.7      3.03              0.15        16    0.189       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 2400.0 total, 600.0 interval
Flush(GB): cumulative 0.038, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.17 GB write, 0.07 MB/s write, 0.16 GB read, 0.07 MB/s read, 6.3 seconds
Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 1.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5652937211f0#2 capacity: 304.00 MB usage: 18.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000155 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1046,17.50 MB,5.75537%) FilterBlock(34,228.92 KB,0.0735383%) IndexBlock(34,417.02 KB,0.133961%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 26 13:18:54 np0005596060 determined_benz[271725]: {
Jan 26 13:18:54 np0005596060 determined_benz[271725]:    "1": [
Jan 26 13:18:54 np0005596060 determined_benz[271725]:        {
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            "devices": [
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "/dev/loop3"
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            ],
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            "lv_name": "ceph_lv0",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            "lv_size": "7511998464",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            "name": "ceph_lv0",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            "tags": {
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "ceph.cluster_name": "ceph",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "ceph.crush_device_class": "",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "ceph.encrypted": "0",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "ceph.osd_id": "1",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "ceph.type": "block",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:                "ceph.vdo": "0"
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            },
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            "type": "block",
Jan 26 13:18:54 np0005596060 determined_benz[271725]:            "vg_name": "ceph_vg0"
Jan 26 13:18:54 np0005596060 determined_benz[271725]:        }
Jan 26 13:18:54 np0005596060 determined_benz[271725]:    ]
Jan 26 13:18:54 np0005596060 determined_benz[271725]: }
Jan 26 13:18:54 np0005596060 systemd[1]: libpod-b4395c72b6694b1d3af1916e4e38f49e7deda2347024e8d609411669e521a818.scope: Deactivated successfully.
Jan 26 13:18:54 np0005596060 podman[271709]: 2026-01-26 18:18:54.986442926 +0000 UTC m=+0.981083726 container died b4395c72b6694b1d3af1916e4e38f49e7deda2347024e8d609411669e521a818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:18:55 np0005596060 systemd[1]: var-lib-containers-storage-overlay-3d687cf877d1e6cda68b3d347e3c9e635f6cac3ceb10d98597500877dec60587-merged.mount: Deactivated successfully.
Jan 26 13:18:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:56.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:56 np0005596060 podman[271709]: 2026-01-26 18:18:56.128948927 +0000 UTC m=+2.123589747 container remove b4395c72b6694b1d3af1916e4e38f49e7deda2347024e8d609411669e521a818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_benz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 13:18:56 np0005596060 systemd[1]: libpod-conmon-b4395c72b6694b1d3af1916e4e38f49e7deda2347024e8d609411669e521a818.scope: Deactivated successfully.
Jan 26 13:18:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 26 13:18:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:56.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 26 13:18:56 np0005596060 nova_compute[247421]: 2026-01-26 18:18:56.409 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:18:56 np0005596060 podman[271888]: 2026-01-26 18:18:56.827068674 +0000 UTC m=+0.028857936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:18:57 np0005596060 nova_compute[247421]: 2026-01-26 18:18:57.636 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:18:57 np0005596060 podman[271888]: 2026-01-26 18:18:57.90610586 +0000 UTC m=+1.107895052 container create c41c1dffa5b9cc924f39c871c3658aec9b69af08986bd9673b543e25a061b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kilby, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:18:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:18:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:18:58.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:18:58 np0005596060 systemd[1]: Started libpod-conmon-c41c1dffa5b9cc924f39c871c3658aec9b69af08986bd9673b543e25a061b650.scope.
Jan 26 13:18:58 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:18:58 np0005596060 podman[271888]: 2026-01-26 18:18:58.197843538 +0000 UTC m=+1.399632720 container init c41c1dffa5b9cc924f39c871c3658aec9b69af08986bd9673b543e25a061b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 26 13:18:58 np0005596060 podman[271888]: 2026-01-26 18:18:58.205138972 +0000 UTC m=+1.406928144 container start c41c1dffa5b9cc924f39c871c3658aec9b69af08986bd9673b543e25a061b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 13:18:58 np0005596060 podman[271888]: 2026-01-26 18:18:58.209411679 +0000 UTC m=+1.411200851 container attach c41c1dffa5b9cc924f39c871c3658aec9b69af08986bd9673b543e25a061b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:18:58 np0005596060 eloquent_kilby[271905]: 167 167
Jan 26 13:18:58 np0005596060 systemd[1]: libpod-c41c1dffa5b9cc924f39c871c3658aec9b69af08986bd9673b543e25a061b650.scope: Deactivated successfully.
Jan 26 13:18:58 np0005596060 podman[271888]: 2026-01-26 18:18:58.212710272 +0000 UTC m=+1.414499444 container died c41c1dffa5b9cc924f39c871c3658aec9b69af08986bd9673b543e25a061b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kilby, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:18:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:18:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:18:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:18:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:18:58.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:18:58 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7d58341d79b9b61ac3623b40a74c9d073df841ceaf7d2733802cfa3713cb700d-merged.mount: Deactivated successfully.
Jan 26 13:18:58 np0005596060 podman[271888]: 2026-01-26 18:18:58.419085106 +0000 UTC m=+1.620874298 container remove c41c1dffa5b9cc924f39c871c3658aec9b69af08986bd9673b543e25a061b650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:18:58 np0005596060 systemd[1]: libpod-conmon-c41c1dffa5b9cc924f39c871c3658aec9b69af08986bd9673b543e25a061b650.scope: Deactivated successfully.
Jan 26 13:18:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:18:58 np0005596060 podman[271929]: 2026-01-26 18:18:58.60036612 +0000 UTC m=+0.044909279 container create 5812131d086812345f877727f4107ba571ea16a4f4182afac50779e23afc566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bouman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:18:58 np0005596060 systemd[1]: Started libpod-conmon-5812131d086812345f877727f4107ba571ea16a4f4182afac50779e23afc566d.scope.
Jan 26 13:18:58 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:18:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fe557bd0bffa418b1223208e0443158e2dc8a4c80e70376379eef434569af0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fe557bd0bffa418b1223208e0443158e2dc8a4c80e70376379eef434569af0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fe557bd0bffa418b1223208e0443158e2dc8a4c80e70376379eef434569af0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fe557bd0bffa418b1223208e0443158e2dc8a4c80e70376379eef434569af0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:18:58 np0005596060 podman[271929]: 2026-01-26 18:18:58.581482936 +0000 UTC m=+0.026026145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:18:58 np0005596060 podman[271929]: 2026-01-26 18:18:58.678940474 +0000 UTC m=+0.123483643 container init 5812131d086812345f877727f4107ba571ea16a4f4182afac50779e23afc566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bouman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:18:58 np0005596060 podman[271929]: 2026-01-26 18:18:58.685272293 +0000 UTC m=+0.129815452 container start 5812131d086812345f877727f4107ba571ea16a4f4182afac50779e23afc566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bouman, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 13:18:58 np0005596060 podman[271929]: 2026-01-26 18:18:58.689327135 +0000 UTC m=+0.133870314 container attach 5812131d086812345f877727f4107ba571ea16a4f4182afac50779e23afc566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bouman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:18:59 np0005596060 hopeful_bouman[271945]: {
Jan 26 13:18:59 np0005596060 hopeful_bouman[271945]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:18:59 np0005596060 hopeful_bouman[271945]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:18:59 np0005596060 hopeful_bouman[271945]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:18:59 np0005596060 hopeful_bouman[271945]:        "osd_id": 1,
Jan 26 13:18:59 np0005596060 hopeful_bouman[271945]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:18:59 np0005596060 hopeful_bouman[271945]:        "type": "bluestore"
Jan 26 13:18:59 np0005596060 hopeful_bouman[271945]:    }
Jan 26 13:18:59 np0005596060 hopeful_bouman[271945]: }
Jan 26 13:18:59 np0005596060 systemd[1]: libpod-5812131d086812345f877727f4107ba571ea16a4f4182afac50779e23afc566d.scope: Deactivated successfully.
Jan 26 13:18:59 np0005596060 podman[271929]: 2026-01-26 18:18:59.521624183 +0000 UTC m=+0.966167342 container died 5812131d086812345f877727f4107ba571ea16a4f4182afac50779e23afc566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bouman, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:18:59 np0005596060 systemd[1]: var-lib-containers-storage-overlay-15fe557bd0bffa418b1223208e0443158e2dc8a4c80e70376379eef434569af0-merged.mount: Deactivated successfully.
Jan 26 13:18:59 np0005596060 podman[271929]: 2026-01-26 18:18:59.574960493 +0000 UTC m=+1.019503662 container remove 5812131d086812345f877727f4107ba571ea16a4f4182afac50779e23afc566d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_bouman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:18:59 np0005596060 systemd[1]: libpod-conmon-5812131d086812345f877727f4107ba571ea16a4f4182afac50779e23afc566d.scope: Deactivated successfully.
Jan 26 13:18:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:18:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:18:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:18:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:18:59 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev de039a0e-d72f-4b61-a179-81070226926d does not exist
Jan 26 13:18:59 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 3d74018b-5ad3-458e-b820-17cf9013a98c does not exist
Jan 26 13:18:59 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 0c689223-e18e-47a6-aa62-f01f03e3e629 does not exist
Jan 26 13:18:59 np0005596060 nova_compute[247421]: 2026-01-26 18:18:59.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:18:59 np0005596060 nova_compute[247421]: 2026-01-26 18:18:59.653 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 13:19:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:00.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:19:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:00.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:19:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:19:01 np0005596060 nova_compute[247421]: 2026-01-26 18:19:01.411 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:19:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Jan 26 13:19:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Jan 26 13:19:01 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Jan 26 13:19:01 np0005596060 nova_compute[247421]: 2026-01-26 18:19:01.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:19:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:02.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 7.2 KiB/s rd, 1.1 KiB/s wr, 10 op/s
Jan 26 13:19:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:19:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:02.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:19:02 np0005596060 nova_compute[247421]: 2026-01-26 18:19:02.638 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:19:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:19:03 np0005596060 nova_compute[247421]: 2026-01-26 18:19:03.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5711064000558348 quantized to 32 (current 32)
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:19:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:19:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:04.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 7.2 KiB/s rd, 1.2 KiB/s wr, 10 op/s
Jan 26 13:19:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:04.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:04 np0005596060 nova_compute[247421]: 2026-01-26 18:19:04.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:19:04 np0005596060 nova_compute[247421]: 2026-01-26 18:19:04.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:19:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Jan 26 13:19:05 np0005596060 nova_compute[247421]: 2026-01-26 18:19:05.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:19:05 np0005596060 nova_compute[247421]: 2026-01-26 18:19:05.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:19:05 np0005596060 nova_compute[247421]: 2026-01-26 18:19:05.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:19:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Jan 26 13:19:05 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Jan 26 13:19:05 np0005596060 nova_compute[247421]: 2026-01-26 18:19:05.782 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:19:05 np0005596060 nova_compute[247421]: 2026-01-26 18:19:05.783 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:19:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:06.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 41 MiB data, 268 MiB used, 21 GiB / 21 GiB avail; 9.0 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Jan 26 13:19:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:06.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:06 np0005596060 nova_compute[247421]: 2026-01-26 18:19:06.414 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:07 np0005596060 nova_compute[247421]: 2026-01-26 18:19:07.640 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:07 np0005596060 nova_compute[247421]: 2026-01-26 18:19:07.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:19:07 np0005596060 nova_compute[247421]: 2026-01-26 18:19:07.677 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:19:07 np0005596060 nova_compute[247421]: 2026-01-26 18:19:07.677 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:19:07 np0005596060 nova_compute[247421]: 2026-01-26 18:19:07.677 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:19:07 np0005596060 nova_compute[247421]: 2026-01-26 18:19:07.678 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:19:07 np0005596060 nova_compute[247421]: 2026-01-26 18:19:07.678 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:19:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Jan 26 13:19:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Jan 26 13:19:07 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Jan 26 13:19:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:19:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2707943388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:19:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:19:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:08.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:19:08 np0005596060 nova_compute[247421]: 2026-01-26 18:19:08.131 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:19:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 3.2 KiB/s wr, 30 op/s
Jan 26 13:19:08 np0005596060 nova_compute[247421]: 2026-01-26 18:19:08.301 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:19:08 np0005596060 nova_compute[247421]: 2026-01-26 18:19:08.302 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4804MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:19:08 np0005596060 nova_compute[247421]: 2026-01-26 18:19:08.303 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:19:08 np0005596060 nova_compute[247421]: 2026-01-26 18:19:08.303 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:19:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:08.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:19:08 np0005596060 nova_compute[247421]: 2026-01-26 18:19:08.639 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:19:08 np0005596060 nova_compute[247421]: 2026-01-26 18:19:08.640 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:19:08 np0005596060 nova_compute[247421]: 2026-01-26 18:19:08.662 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:19:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:19:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268356138' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:19:09 np0005596060 nova_compute[247421]: 2026-01-26 18:19:09.135 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:19:09 np0005596060 nova_compute[247421]: 2026-01-26 18:19:09.141 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:19:09 np0005596060 nova_compute[247421]: 2026-01-26 18:19:09.163 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:19:09 np0005596060 nova_compute[247421]: 2026-01-26 18:19:09.165 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:19:09 np0005596060 nova_compute[247421]: 2026-01-26 18:19:09.165 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:19:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:10.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 2.6 KiB/s wr, 24 op/s
Jan 26 13:19:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:10.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:11 np0005596060 nova_compute[247421]: 2026-01-26 18:19:11.416 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:19:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:12.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:19:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 4.7 KiB/s wr, 43 op/s
Jan 26 13:19:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:12.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:12 np0005596060 nova_compute[247421]: 2026-01-26 18:19:12.642 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:19:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:19:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:19:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:19:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:19:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:19:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:19:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:14.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:14 np0005596060 nova_compute[247421]: 2026-01-26 18:19:14.166 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:19:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 4.7 KiB/s wr, 41 op/s
Jan 26 13:19:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:14.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:19:14.749 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:19:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:19:14.750 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:19:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:19:14.750 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:19:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Jan 26 13:19:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Jan 26 13:19:15 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Jan 26 13:19:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:16.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.4 KiB/s wr, 19 op/s
Jan 26 13:19:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:16.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:16 np0005596060 nova_compute[247421]: 2026-01-26 18:19:16.417 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Jan 26 13:19:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Jan 26 13:19:17 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Jan 26 13:19:17 np0005596060 nova_compute[247421]: 2026-01-26 18:19:17.645 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:18.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 5.0 KiB/s wr, 61 op/s
Jan 26 13:19:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:18.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:19:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Jan 26 13:19:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Jan 26 13:19:18 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Jan 26 13:19:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:20.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 41 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Jan 26 13:19:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:20.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Jan 26 13:19:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Jan 26 13:19:20 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Jan 26 13:19:21 np0005596060 nova_compute[247421]: 2026-01-26 18:19:21.419 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:19:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:22.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:19:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 107 KiB/s rd, 5.8 KiB/s wr, 143 op/s
Jan 26 13:19:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:22.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:22 np0005596060 nova_compute[247421]: 2026-01-26 18:19:22.647 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:19:23 np0005596060 podman[272136]: 2026-01-26 18:19:23.813919947 +0000 UTC m=+0.066723692 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 26 13:19:23 np0005596060 podman[272137]: 2026-01-26 18:19:23.851824723 +0000 UTC m=+0.103948561 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 13:19:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:24.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 59 KiB/s rd, 3.4 KiB/s wr, 80 op/s
Jan 26 13:19:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:24.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:24 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:19:24.431 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:19:24 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:19:24.431 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:19:24 np0005596060 nova_compute[247421]: 2026-01-26 18:19:24.432 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:26.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 2.6 KiB/s wr, 69 op/s
Jan 26 13:19:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:19:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:26.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:19:26 np0005596060 nova_compute[247421]: 2026-01-26 18:19:26.420 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:27 np0005596060 nova_compute[247421]: 2026-01-26 18:19:27.650 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:19:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:28.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:19:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 2.5 KiB/s wr, 59 op/s
Jan 26 13:19:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:28.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:19:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Jan 26 13:19:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Jan 26 13:19:28 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Jan 26 13:19:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:30.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 2.6 KiB/s wr, 59 op/s
Jan 26 13:19:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:30.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:31 np0005596060 nova_compute[247421]: 2026-01-26 18:19:31.422 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:19:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:32.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:19:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 26 13:19:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:32.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:32 np0005596060 nova_compute[247421]: 2026-01-26 18:19:32.653 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:19:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:34.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 409 B/s wr, 2 op/s
Jan 26 13:19:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:34.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:34 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:19:34.433 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:19:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:19:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:36.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:19:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 409 B/s wr, 2 op/s
Jan 26 13:19:36 np0005596060 nova_compute[247421]: 2026-01-26 18:19:36.426 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:36.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:37 np0005596060 nova_compute[247421]: 2026-01-26 18:19:37.655 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:19:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:38.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:19:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:19:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:38.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:19:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:40.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:19:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:19:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/520643765' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:19:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:19:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/520643765' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:19:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:40.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:41 np0005596060 nova_compute[247421]: 2026-01-26 18:19:41.426 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:42.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:19:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:42.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:42 np0005596060 nova_compute[247421]: 2026-01-26 18:19:42.731 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:19:44
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms', 'default.rgw.meta', '.rgw.root', '.mgr']
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:19:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:19:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:44.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:19:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:44.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:19:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:19:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:46.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:19:46 np0005596060 nova_compute[247421]: 2026-01-26 18:19:46.428 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:46.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:47 np0005596060 nova_compute[247421]: 2026-01-26 18:19:47.734 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:48.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:19:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:48.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:19:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:50.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:19:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:19:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:50.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:19:51 np0005596060 nova_compute[247421]: 2026-01-26 18:19:51.430 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:19:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:52.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:19:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:19:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:52.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:52 np0005596060 nova_compute[247421]: 2026-01-26 18:19:52.736 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:19:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:19:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:54.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:19:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:19:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:54.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:54 np0005596060 podman[272292]: 2026-01-26 18:19:54.798361744 +0000 UTC m=+0.057553688 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:19:54 np0005596060 podman[272293]: 2026-01-26 18:19:54.838282722 +0000 UTC m=+0.091191556 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 26 13:19:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:56.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:19:56 np0005596060 nova_compute[247421]: 2026-01-26 18:19:56.431 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:56.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:57 np0005596060 nova_compute[247421]: 2026-01-26 18:19:57.738 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:19:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:19:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:19:58.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:19:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:19:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:19:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:19:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:19:58.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:19:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:20:00 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 13:20:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:00.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:20:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:00.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:20:00 np0005596060 ceph-mon[74267]: overall HEALTH_OK
Jan 26 13:20:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:20:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:20:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:20:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:20:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:20:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:20:01 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8bf63dc0-4ac3-487b-bf62-c194a073cdd2 does not exist
Jan 26 13:20:01 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 89ff4bf8-ab6d-420b-b104-676abd676dda does not exist
Jan 26 13:20:01 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 056e8646-74b5-46d6-9b39-2470bf041632 does not exist
Jan 26 13:20:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:20:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:20:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:20:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:20:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:20:01 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:20:01 np0005596060 nova_compute[247421]: 2026-01-26 18:20:01.433 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:01 np0005596060 nova_compute[247421]: 2026-01-26 18:20:01.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:20:01 np0005596060 nova_compute[247421]: 2026-01-26 18:20:01.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:20:01 np0005596060 nova_compute[247421]: 2026-01-26 18:20:01.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:20:01 np0005596060 podman[272609]: 2026-01-26 18:20:01.724361512 +0000 UTC m=+0.025730246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:20:01 np0005596060 podman[272609]: 2026-01-26 18:20:01.983653681 +0000 UTC m=+0.285022395 container create 154a0ef01028ec697add10ceec3a37faec31608e5bc6afb27c3de18e3f3963ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 13:20:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:20:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:20:01 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:20:02 np0005596060 systemd[1]: Started libpod-conmon-154a0ef01028ec697add10ceec3a37faec31608e5bc6afb27c3de18e3f3963ec.scope.
Jan 26 13:20:02 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:20:02 np0005596060 podman[272609]: 2026-01-26 18:20:02.162626824 +0000 UTC m=+0.463995558 container init 154a0ef01028ec697add10ceec3a37faec31608e5bc6afb27c3de18e3f3963ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:20:02 np0005596060 podman[272609]: 2026-01-26 18:20:02.170356972 +0000 UTC m=+0.471725686 container start 154a0ef01028ec697add10ceec3a37faec31608e5bc6afb27c3de18e3f3963ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldwasser, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:20:02 np0005596060 flamboyant_goldwasser[272626]: 167 167
Jan 26 13:20:02 np0005596060 systemd[1]: libpod-154a0ef01028ec697add10ceec3a37faec31608e5bc6afb27c3de18e3f3963ec.scope: Deactivated successfully.
Jan 26 13:20:02 np0005596060 podman[272609]: 2026-01-26 18:20:02.183368323 +0000 UTC m=+0.484737037 container attach 154a0ef01028ec697add10ceec3a37faec31608e5bc6afb27c3de18e3f3963ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 13:20:02 np0005596060 podman[272609]: 2026-01-26 18:20:02.183789904 +0000 UTC m=+0.485158618 container died 154a0ef01028ec697add10ceec3a37faec31608e5bc6afb27c3de18e3f3963ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:20:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:02.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:02 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0ab907b12d8bd07248db920d5e7753955ddf9b64637532f1b9c91e6fe7195f7d-merged.mount: Deactivated successfully.
Jan 26 13:20:02 np0005596060 podman[272609]: 2026-01-26 18:20:02.272606628 +0000 UTC m=+0.573975342 container remove 154a0ef01028ec697add10ceec3a37faec31608e5bc6afb27c3de18e3f3963ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 13:20:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:02 np0005596060 systemd[1]: libpod-conmon-154a0ef01028ec697add10ceec3a37faec31608e5bc6afb27c3de18e3f3963ec.scope: Deactivated successfully.
Jan 26 13:20:02 np0005596060 podman[272650]: 2026-01-26 18:20:02.44018989 +0000 UTC m=+0.045041229 container create 5d5163a95d212c2c91c5e226af9e7663d7c4a406ca526dacd6964854a767137b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_benz, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 13:20:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:02.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:02 np0005596060 systemd[1]: Started libpod-conmon-5d5163a95d212c2c91c5e226af9e7663d7c4a406ca526dacd6964854a767137b.scope.
Jan 26 13:20:02 np0005596060 podman[272650]: 2026-01-26 18:20:02.418430866 +0000 UTC m=+0.023282205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:20:02 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:20:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415c3e5213385a10153c4d6a7eaa92aba8d5d7edade7de9815573553bcbb136d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415c3e5213385a10153c4d6a7eaa92aba8d5d7edade7de9815573553bcbb136d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415c3e5213385a10153c4d6a7eaa92aba8d5d7edade7de9815573553bcbb136d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415c3e5213385a10153c4d6a7eaa92aba8d5d7edade7de9815573553bcbb136d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415c3e5213385a10153c4d6a7eaa92aba8d5d7edade7de9815573553bcbb136d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:02 np0005596060 podman[272650]: 2026-01-26 18:20:02.566059899 +0000 UTC m=+0.170911248 container init 5d5163a95d212c2c91c5e226af9e7663d7c4a406ca526dacd6964854a767137b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_benz, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:20:02 np0005596060 podman[272650]: 2026-01-26 18:20:02.574591816 +0000 UTC m=+0.179443135 container start 5d5163a95d212c2c91c5e226af9e7663d7c4a406ca526dacd6964854a767137b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_benz, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:20:02 np0005596060 podman[272650]: 2026-01-26 18:20:02.673004695 +0000 UTC m=+0.277856014 container attach 5d5163a95d212c2c91c5e226af9e7663d7c4a406ca526dacd6964854a767137b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_benz, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 13:20:02 np0005596060 nova_compute[247421]: 2026-01-26 18:20:02.740 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:03 np0005596060 recursing_benz[272666]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:20:03 np0005596060 recursing_benz[272666]: --> relative data size: 1.0
Jan 26 13:20:03 np0005596060 recursing_benz[272666]: --> All data devices are unavailable
Jan 26 13:20:03 np0005596060 systemd[1]: libpod-5d5163a95d212c2c91c5e226af9e7663d7c4a406ca526dacd6964854a767137b.scope: Deactivated successfully.
Jan 26 13:20:03 np0005596060 podman[272650]: 2026-01-26 18:20:03.391315217 +0000 UTC m=+0.996166536 container died 5d5163a95d212c2c91c5e226af9e7663d7c4a406ca526dacd6964854a767137b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_benz, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 13:20:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:20:03 np0005596060 systemd[1]: var-lib-containers-storage-overlay-415c3e5213385a10153c4d6a7eaa92aba8d5d7edade7de9815573553bcbb136d-merged.mount: Deactivated successfully.
Jan 26 13:20:03 np0005596060 podman[272650]: 2026-01-26 18:20:03.595537943 +0000 UTC m=+1.200389262 container remove 5d5163a95d212c2c91c5e226af9e7663d7c4a406ca526dacd6964854a767137b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_benz, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:20:03 np0005596060 systemd[1]: libpod-conmon-5d5163a95d212c2c91c5e226af9e7663d7c4a406ca526dacd6964854a767137b.scope: Deactivated successfully.
Jan 26 13:20:03 np0005596060 nova_compute[247421]: 2026-01-26 18:20:03.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:20:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:20:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:04.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:04 np0005596060 podman[272830]: 2026-01-26 18:20:04.213902176 +0000 UTC m=+0.064104695 container create d008841cecefa3aa4cf123b59057ec852e2b1c7600366ae516f017008ddf215b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:20:04 np0005596060 podman[272830]: 2026-01-26 18:20:04.171041124 +0000 UTC m=+0.021243663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:20:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:04 np0005596060 systemd[1]: Started libpod-conmon-d008841cecefa3aa4cf123b59057ec852e2b1c7600366ae516f017008ddf215b.scope.
Jan 26 13:20:04 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:20:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:04.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:04 np0005596060 podman[272830]: 2026-01-26 18:20:04.643757384 +0000 UTC m=+0.493959923 container init d008841cecefa3aa4cf123b59057ec852e2b1c7600366ae516f017008ddf215b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 13:20:04 np0005596060 nova_compute[247421]: 2026-01-26 18:20:04.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:20:04 np0005596060 podman[272830]: 2026-01-26 18:20:04.65104971 +0000 UTC m=+0.501252229 container start d008841cecefa3aa4cf123b59057ec852e2b1c7600366ae516f017008ddf215b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:20:04 np0005596060 goofy_snyder[272847]: 167 167
Jan 26 13:20:04 np0005596060 systemd[1]: libpod-d008841cecefa3aa4cf123b59057ec852e2b1c7600366ae516f017008ddf215b.scope: Deactivated successfully.
Jan 26 13:20:04 np0005596060 podman[272830]: 2026-01-26 18:20:04.733687047 +0000 UTC m=+0.583889576 container attach d008841cecefa3aa4cf123b59057ec852e2b1c7600366ae516f017008ddf215b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_snyder, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 13:20:04 np0005596060 podman[272830]: 2026-01-26 18:20:04.734942649 +0000 UTC m=+0.585145188 container died d008841cecefa3aa4cf123b59057ec852e2b1c7600366ae516f017008ddf215b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 13:20:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c8c7941c1272f8117561e252b872e6f56e00fc0490420d0028d87d15fae0616e-merged.mount: Deactivated successfully.
Jan 26 13:20:05 np0005596060 podman[272830]: 2026-01-26 18:20:05.091105777 +0000 UTC m=+0.941308336 container remove d008841cecefa3aa4cf123b59057ec852e2b1c7600366ae516f017008ddf215b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:20:05 np0005596060 systemd[1]: libpod-conmon-d008841cecefa3aa4cf123b59057ec852e2b1c7600366ae516f017008ddf215b.scope: Deactivated successfully.
Jan 26 13:20:05 np0005596060 podman[272871]: 2026-01-26 18:20:05.276625707 +0000 UTC m=+0.055775853 container create 12bdc48898903e749aa8d21c3463d997802abb46177e88da26ff64fcb3d14488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 13:20:05 np0005596060 systemd[1]: Started libpod-conmon-12bdc48898903e749aa8d21c3463d997802abb46177e88da26ff64fcb3d14488.scope.
Jan 26 13:20:05 np0005596060 podman[272871]: 2026-01-26 18:20:05.249626639 +0000 UTC m=+0.028776785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:20:05 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:20:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/062d34d1ce7cdac20df300b0ae5579b281959dead4026c8d9a66602f851875b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/062d34d1ce7cdac20df300b0ae5579b281959dead4026c8d9a66602f851875b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/062d34d1ce7cdac20df300b0ae5579b281959dead4026c8d9a66602f851875b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/062d34d1ce7cdac20df300b0ae5579b281959dead4026c8d9a66602f851875b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:05 np0005596060 podman[272871]: 2026-01-26 18:20:05.479042727 +0000 UTC m=+0.258192873 container init 12bdc48898903e749aa8d21c3463d997802abb46177e88da26ff64fcb3d14488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 26 13:20:05 np0005596060 podman[272871]: 2026-01-26 18:20:05.484688081 +0000 UTC m=+0.263838217 container start 12bdc48898903e749aa8d21c3463d997802abb46177e88da26ff64fcb3d14488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 13:20:05 np0005596060 podman[272871]: 2026-01-26 18:20:05.488379155 +0000 UTC m=+0.267529371 container attach 12bdc48898903e749aa8d21c3463d997802abb46177e88da26ff64fcb3d14488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hodgkin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:20:05 np0005596060 nova_compute[247421]: 2026-01-26 18:20:05.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:20:05 np0005596060 nova_compute[247421]: 2026-01-26 18:20:05.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:20:05 np0005596060 nova_compute[247421]: 2026-01-26 18:20:05.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:20:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:20:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:06.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]: {
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:    "1": [
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:        {
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            "devices": [
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "/dev/loop3"
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            ],
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            "lv_name": "ceph_lv0",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            "lv_size": "7511998464",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            "name": "ceph_lv0",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            "tags": {
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "ceph.cluster_name": "ceph",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "ceph.crush_device_class": "",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "ceph.encrypted": "0",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "ceph.osd_id": "1",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "ceph.type": "block",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:                "ceph.vdo": "0"
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            },
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            "type": "block",
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:            "vg_name": "ceph_vg0"
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:        }
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]:    ]
Jan 26 13:20:06 np0005596060 tender_hodgkin[272887]: }
Jan 26 13:20:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:06 np0005596060 systemd[1]: libpod-12bdc48898903e749aa8d21c3463d997802abb46177e88da26ff64fcb3d14488.scope: Deactivated successfully.
Jan 26 13:20:06 np0005596060 podman[272897]: 2026-01-26 18:20:06.332586036 +0000 UTC m=+0.024706981 container died 12bdc48898903e749aa8d21c3463d997802abb46177e88da26ff64fcb3d14488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 26 13:20:06 np0005596060 nova_compute[247421]: 2026-01-26 18:20:06.434 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:06.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:06 np0005596060 nova_compute[247421]: 2026-01-26 18:20:06.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:20:06 np0005596060 nova_compute[247421]: 2026-01-26 18:20:06.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:20:06 np0005596060 nova_compute[247421]: 2026-01-26 18:20:06.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:20:06 np0005596060 nova_compute[247421]: 2026-01-26 18:20:06.688 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:20:06 np0005596060 systemd[1]: var-lib-containers-storage-overlay-062d34d1ce7cdac20df300b0ae5579b281959dead4026c8d9a66602f851875b0-merged.mount: Deactivated successfully.
Jan 26 13:20:07 np0005596060 podman[272897]: 2026-01-26 18:20:07.098697036 +0000 UTC m=+0.790817951 container remove 12bdc48898903e749aa8d21c3463d997802abb46177e88da26ff64fcb3d14488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 13:20:07 np0005596060 systemd[1]: libpod-conmon-12bdc48898903e749aa8d21c3463d997802abb46177e88da26ff64fcb3d14488.scope: Deactivated successfully.
Jan 26 13:20:07 np0005596060 nova_compute[247421]: 2026-01-26 18:20:07.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:20:07 np0005596060 nova_compute[247421]: 2026-01-26 18:20:07.744 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:07 np0005596060 nova_compute[247421]: 2026-01-26 18:20:07.768 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:20:07 np0005596060 nova_compute[247421]: 2026-01-26 18:20:07.769 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:20:07 np0005596060 nova_compute[247421]: 2026-01-26 18:20:07.769 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:20:07 np0005596060 nova_compute[247421]: 2026-01-26 18:20:07.770 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:20:07 np0005596060 nova_compute[247421]: 2026-01-26 18:20:07.770 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:20:08 np0005596060 podman[273056]: 2026-01-26 18:20:08.01796439 +0000 UTC m=+0.078982144 container create 31e1ae28385bc0865a886232ab9a7f20140988de6d5a7bf8d6f7f47aeb332f69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 13:20:08 np0005596060 podman[273056]: 2026-01-26 18:20:07.967123324 +0000 UTC m=+0.028141178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:20:08 np0005596060 systemd[1]: Started libpod-conmon-31e1ae28385bc0865a886232ab9a7f20140988de6d5a7bf8d6f7f47aeb332f69.scope.
Jan 26 13:20:08 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:20:08 np0005596060 podman[273056]: 2026-01-26 18:20:08.176131262 +0000 UTC m=+0.237149066 container init 31e1ae28385bc0865a886232ab9a7f20140988de6d5a7bf8d6f7f47aeb332f69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True)
Jan 26 13:20:08 np0005596060 podman[273056]: 2026-01-26 18:20:08.185415259 +0000 UTC m=+0.246433063 container start 31e1ae28385bc0865a886232ab9a7f20140988de6d5a7bf8d6f7f47aeb332f69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mahavira, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 13:20:08 np0005596060 suspicious_mahavira[273087]: 167 167
Jan 26 13:20:08 np0005596060 systemd[1]: libpod-31e1ae28385bc0865a886232ab9a7f20140988de6d5a7bf8d6f7f47aeb332f69.scope: Deactivated successfully.
Jan 26 13:20:08 np0005596060 podman[273056]: 2026-01-26 18:20:08.20113212 +0000 UTC m=+0.262149884 container attach 31e1ae28385bc0865a886232ab9a7f20140988de6d5a7bf8d6f7f47aeb332f69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:20:08 np0005596060 podman[273056]: 2026-01-26 18:20:08.201654693 +0000 UTC m=+0.262672467 container died 31e1ae28385bc0865a886232ab9a7f20140988de6d5a7bf8d6f7f47aeb332f69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mahavira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 26 13:20:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:20:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:08.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:20:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:20:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/284597261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:20:08 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8d3fe41d190c8bf31218ba6bbfdd6c782ed40872269fd03461f79a3c9d8ff3bf-merged.mount: Deactivated successfully.
Jan 26 13:20:08 np0005596060 nova_compute[247421]: 2026-01-26 18:20:08.251 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:20:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:08 np0005596060 podman[273056]: 2026-01-26 18:20:08.409136932 +0000 UTC m=+0.470154696 container remove 31e1ae28385bc0865a886232ab9a7f20140988de6d5a7bf8d6f7f47aeb332f69 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 13:20:08 np0005596060 systemd[1]: libpod-conmon-31e1ae28385bc0865a886232ab9a7f20140988de6d5a7bf8d6f7f47aeb332f69.scope: Deactivated successfully.
Jan 26 13:20:08 np0005596060 nova_compute[247421]: 2026-01-26 18:20:08.426 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:20:08 np0005596060 nova_compute[247421]: 2026-01-26 18:20:08.427 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4770MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:20:08 np0005596060 nova_compute[247421]: 2026-01-26 18:20:08.428 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:20:08 np0005596060 nova_compute[247421]: 2026-01-26 18:20:08.428 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:20:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:08.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:20:08 np0005596060 podman[273115]: 2026-01-26 18:20:08.585835986 +0000 UTC m=+0.051523105 container create 100074bce09fefe774e0a92689dbd561d76492ab7bdd1a9152399edaa5fb00e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:20:08 np0005596060 systemd[1]: Started libpod-conmon-100074bce09fefe774e0a92689dbd561d76492ab7bdd1a9152399edaa5fb00e9.scope.
Jan 26 13:20:08 np0005596060 podman[273115]: 2026-01-26 18:20:08.561576447 +0000 UTC m=+0.027263646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:20:08 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:20:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9119d43e32fa476fe15c9edf9eef53e70ed70ec86ef66550f45a43e7f520d32c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9119d43e32fa476fe15c9edf9eef53e70ed70ec86ef66550f45a43e7f520d32c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9119d43e32fa476fe15c9edf9eef53e70ed70ec86ef66550f45a43e7f520d32c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9119d43e32fa476fe15c9edf9eef53e70ed70ec86ef66550f45a43e7f520d32c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:20:08 np0005596060 podman[273115]: 2026-01-26 18:20:08.713611883 +0000 UTC m=+0.179299052 container init 100074bce09fefe774e0a92689dbd561d76492ab7bdd1a9152399edaa5fb00e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 13:20:08 np0005596060 podman[273115]: 2026-01-26 18:20:08.720580881 +0000 UTC m=+0.186268010 container start 100074bce09fefe774e0a92689dbd561d76492ab7bdd1a9152399edaa5fb00e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatterjee, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:20:08 np0005596060 podman[273115]: 2026-01-26 18:20:08.778871067 +0000 UTC m=+0.244558236 container attach 100074bce09fefe774e0a92689dbd561d76492ab7bdd1a9152399edaa5fb00e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatterjee, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 13:20:08 np0005596060 nova_compute[247421]: 2026-01-26 18:20:08.950 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 13:20:08 np0005596060 nova_compute[247421]: 2026-01-26 18:20:08.951 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 13:20:08 np0005596060 nova_compute[247421]: 2026-01-26 18:20:08.968 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:20:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:20:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4010421100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:20:09 np0005596060 nova_compute[247421]: 2026-01-26 18:20:09.446 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:20:09 np0005596060 nova_compute[247421]: 2026-01-26 18:20:09.454 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:20:09 np0005596060 eloquent_chatterjee[273131]: {
Jan 26 13:20:09 np0005596060 eloquent_chatterjee[273131]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:20:09 np0005596060 eloquent_chatterjee[273131]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:20:09 np0005596060 eloquent_chatterjee[273131]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:20:09 np0005596060 eloquent_chatterjee[273131]:        "osd_id": 1,
Jan 26 13:20:09 np0005596060 eloquent_chatterjee[273131]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:20:09 np0005596060 eloquent_chatterjee[273131]:        "type": "bluestore"
Jan 26 13:20:09 np0005596060 eloquent_chatterjee[273131]:    }
Jan 26 13:20:09 np0005596060 eloquent_chatterjee[273131]: }
Jan 26 13:20:09 np0005596060 systemd[1]: libpod-100074bce09fefe774e0a92689dbd561d76492ab7bdd1a9152399edaa5fb00e9.scope: Deactivated successfully.
Jan 26 13:20:09 np0005596060 podman[273115]: 2026-01-26 18:20:09.578576923 +0000 UTC m=+1.044264032 container died 100074bce09fefe774e0a92689dbd561d76492ab7bdd1a9152399edaa5fb00e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatterjee, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:20:09 np0005596060 nova_compute[247421]: 2026-01-26 18:20:09.582 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:20:09 np0005596060 nova_compute[247421]: 2026-01-26 18:20:09.584 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 13:20:09 np0005596060 nova_compute[247421]: 2026-01-26 18:20:09.584 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:20:09 np0005596060 systemd[1]: var-lib-containers-storage-overlay-9119d43e32fa476fe15c9edf9eef53e70ed70ec86ef66550f45a43e7f520d32c-merged.mount: Deactivated successfully.
Jan 26 13:20:09 np0005596060 podman[273115]: 2026-01-26 18:20:09.758747856 +0000 UTC m=+1.224434975 container remove 100074bce09fefe774e0a92689dbd561d76492ab7bdd1a9152399edaa5fb00e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:20:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:20:09 np0005596060 systemd[1]: libpod-conmon-100074bce09fefe774e0a92689dbd561d76492ab7bdd1a9152399edaa5fb00e9.scope: Deactivated successfully.
Jan 26 13:20:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:20:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:20:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:20:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:10.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:20:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:20:10 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 78d40160-5409-45c7-ac70-270211ac64b8 does not exist
Jan 26 13:20:10 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d83f5c8a-e5e2-4407-8e51-5a02b1390c1a does not exist
Jan 26 13:20:10 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b561b933-104f-4933-be22-7b2cc945bdeb does not exist
Jan 26 13:20:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:20:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:10.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:20:10 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:20:10 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:20:11 np0005596060 nova_compute[247421]: 2026-01-26 18:20:11.435 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:20:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:20:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:12.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:20:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000051s ======
Jan 26 13:20:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:12.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 26 13:20:12 np0005596060 nova_compute[247421]: 2026-01-26 18:20:12.747 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:20:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:20:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:20:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:20:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:20:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:20:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:20:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:20:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:14.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:14.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:20:14.750 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:20:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:20:14.751 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:20:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:20:14.751 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:20:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:16.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:16 np0005596060 nova_compute[247421]: 2026-01-26 18:20:16.438 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:20:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:16.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:16 np0005596060 nova_compute[247421]: 2026-01-26 18:20:16.585 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:20:17 np0005596060 nova_compute[247421]: 2026-01-26 18:20:17.790 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:20:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:18.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:18.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:20:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:20.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:20.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:21 np0005596060 nova_compute[247421]: 2026-01-26 18:20:21.440 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:20:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:22.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:20:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:22.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:20:22 np0005596060 nova_compute[247421]: 2026-01-26 18:20:22.792 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:20:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:20:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:24.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:25 np0005596060 podman[273296]: 2026-01-26 18:20:25.81524427 +0000 UTC m=+0.064524946 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 13:20:25 np0005596060 podman[273297]: 2026-01-26 18:20:25.839295673 +0000 UTC m=+0.092088198 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 13:20:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:26.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:26 np0005596060 nova_compute[247421]: 2026-01-26 18:20:26.442 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:20:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:26.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:27 np0005596060 nova_compute[247421]: 2026-01-26 18:20:27.795 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:20:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:20:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:28.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:20:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:28.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:20:29 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:20:29.708 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:20:29 np0005596060 nova_compute[247421]: 2026-01-26 18:20:29.708 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:29 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:20:29.710 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:20:29 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:20:29.710 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:20:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:30.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:30.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:31 np0005596060 nova_compute[247421]: 2026-01-26 18:20:31.444 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:32.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:32.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:32 np0005596060 nova_compute[247421]: 2026-01-26 18:20:32.833 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:20:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:34.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:36.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:36 np0005596060 nova_compute[247421]: 2026-01-26 18:20:36.446 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:36.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:37 np0005596060 nova_compute[247421]: 2026-01-26 18:20:37.836 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:38.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:38.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:20:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:40.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:20:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3173690218' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:20:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:20:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3173690218' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:20:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:40.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:41 np0005596060 nova_compute[247421]: 2026-01-26 18:20:41.448 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:42.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:42.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:42 np0005596060 nova_compute[247421]: 2026-01-26 18:20:42.838 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:20:44
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', '.mgr', 'default.rgw.control']
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:20:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:44.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:44.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:20:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:20:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:46.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:46 np0005596060 nova_compute[247421]: 2026-01-26 18:20:46.450 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:46.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:47 np0005596060 nova_compute[247421]: 2026-01-26 18:20:47.841 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:20:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:48.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:20:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:48.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:20:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:50.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:50.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:51 np0005596060 nova_compute[247421]: 2026-01-26 18:20:51.497 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:52.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:20:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:52.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:20:52 np0005596060 nova_compute[247421]: 2026-01-26 18:20:52.885 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:20:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:20:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:54.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:20:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:54.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:56.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:20:56 np0005596060 nova_compute[247421]: 2026-01-26 18:20:56.499 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:56.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:56 np0005596060 podman[273454]: 2026-01-26 18:20:56.784037079 +0000 UTC m=+0.048764554 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 13:20:56 np0005596060 podman[273455]: 2026-01-26 18:20:56.816009844 +0000 UTC m=+0.076105991 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:20:57 np0005596060 nova_compute[247421]: 2026-01-26 18:20:57.887 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:20:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:20:58.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 26 13:20:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:20:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:20:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:20:58.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:20:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:21:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:00.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 41 MiB data, 269 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 26 13:21:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:00.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:01 np0005596060 nova_compute[247421]: 2026-01-26 18:21:01.501 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:01 np0005596060 nova_compute[247421]: 2026-01-26 18:21:01.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:21:01 np0005596060 nova_compute[247421]: 2026-01-26 18:21:01.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:21:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:21:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:02.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:21:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 49 MiB data, 273 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 398 KiB/s wr, 39 op/s
Jan 26 13:21:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:21:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:02.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:21:02 np0005596060 nova_compute[247421]: 2026-01-26 18:21:02.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:21:02 np0005596060 nova_compute[247421]: 2026-01-26 18:21:02.888 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:03 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00021555985948259817 of space, bias 1.0, pg target 0.06466795784477945 quantized to 32 (current 32)
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:21:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:21:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:04.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 71 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 40 op/s
Jan 26 13:21:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:04.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:04 np0005596060 nova_compute[247421]: 2026-01-26 18:21:04.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:21:05 np0005596060 nova_compute[247421]: 2026-01-26 18:21:05.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:21:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:06.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 71 MiB data, 286 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 40 op/s
Jan 26 13:21:06 np0005596060 nova_compute[247421]: 2026-01-26 18:21:06.502 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:06.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:06 np0005596060 nova_compute[247421]: 2026-01-26 18:21:06.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:21:06 np0005596060 nova_compute[247421]: 2026-01-26 18:21:06.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:21:06 np0005596060 nova_compute[247421]: 2026-01-26 18:21:06.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:21:07 np0005596060 nova_compute[247421]: 2026-01-26 18:21:07.884 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:21:07 np0005596060 nova_compute[247421]: 2026-01-26 18:21:07.885 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:21:07 np0005596060 nova_compute[247421]: 2026-01-26 18:21:07.891 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:08.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 26 13:21:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:08.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:21:08 np0005596060 nova_compute[247421]: 2026-01-26 18:21:08.881 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:21:09 np0005596060 nova_compute[247421]: 2026-01-26 18:21:09.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:21:09 np0005596060 nova_compute[247421]: 2026-01-26 18:21:09.706 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:21:09 np0005596060 nova_compute[247421]: 2026-01-26 18:21:09.706 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:21:09 np0005596060 nova_compute[247421]: 2026-01-26 18:21:09.706 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:21:09 np0005596060 nova_compute[247421]: 2026-01-26 18:21:09.707 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:21:09 np0005596060 nova_compute[247421]: 2026-01-26 18:21:09.707 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:21:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:21:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1824671925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:21:10 np0005596060 nova_compute[247421]: 2026-01-26 18:21:10.171 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:21:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:10.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 26 13:21:10 np0005596060 nova_compute[247421]: 2026-01-26 18:21:10.356 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:21:10 np0005596060 nova_compute[247421]: 2026-01-26 18:21:10.357 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4844MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:21:10 np0005596060 nova_compute[247421]: 2026-01-26 18:21:10.357 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:21:10 np0005596060 nova_compute[247421]: 2026-01-26 18:21:10.358 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:21:10 np0005596060 nova_compute[247421]: 2026-01-26 18:21:10.442 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:21:10 np0005596060 nova_compute[247421]: 2026-01-26 18:21:10.442 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:21:10 np0005596060 nova_compute[247421]: 2026-01-26 18:21:10.469 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:21:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:10.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:21:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1821830067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:21:10 np0005596060 nova_compute[247421]: 2026-01-26 18:21:10.909 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:21:10 np0005596060 nova_compute[247421]: 2026-01-26 18:21:10.921 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:21:11 np0005596060 nova_compute[247421]: 2026-01-26 18:21:11.104 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:21:11 np0005596060 nova_compute[247421]: 2026-01-26 18:21:11.106 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:21:11 np0005596060 nova_compute[247421]: 2026-01-26 18:21:11.106 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:21:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:21:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 13K writes, 3635 syncs, 3.64 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2422 writes, 8105 keys, 2422 commit groups, 1.0 writes per commit group, ingest: 5.56 MB, 0.01 MB/s#012Interval WAL: 2422 writes, 944 syncs, 2.57 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 13:21:11 np0005596060 nova_compute[247421]: 2026-01-26 18:21:11.504 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:21:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:21:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:21:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:21:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:21:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:21:11 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8b6b5c62-cbdd-4b08-bbf2-1c718fba6eca does not exist
Jan 26 13:21:11 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 4809cba9-6273-411d-a38a-1648a52c777f does not exist
Jan 26 13:21:11 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev abb78734-9d2b-4ef9-8842-a9547812b221 does not exist
Jan 26 13:21:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:21:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:21:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:21:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:21:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:21:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:21:12 np0005596060 podman[273824]: 2026-01-26 18:21:12.147362033 +0000 UTC m=+0.054410628 container create 2d825c8d3808300f27a8399541c9476e4563d97c9ed105fdf99aca577c6d6c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:21:12 np0005596060 systemd[1]: Started libpod-conmon-2d825c8d3808300f27a8399541c9476e4563d97c9ed105fdf99aca577c6d6c3a.scope.
Jan 26 13:21:12 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:21:12 np0005596060 podman[273824]: 2026-01-26 18:21:12.21669429 +0000 UTC m=+0.123742905 container init 2d825c8d3808300f27a8399541c9476e4563d97c9ed105fdf99aca577c6d6c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:21:12 np0005596060 podman[273824]: 2026-01-26 18:21:12.224406157 +0000 UTC m=+0.131454752 container start 2d825c8d3808300f27a8399541c9476e4563d97c9ed105fdf99aca577c6d6c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 26 13:21:12 np0005596060 podman[273824]: 2026-01-26 18:21:12.130949054 +0000 UTC m=+0.037997669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:21:12 np0005596060 strange_thompson[273841]: 167 167
Jan 26 13:21:12 np0005596060 podman[273824]: 2026-01-26 18:21:12.228130282 +0000 UTC m=+0.135178907 container attach 2d825c8d3808300f27a8399541c9476e4563d97c9ed105fdf99aca577c6d6c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_thompson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:21:12 np0005596060 systemd[1]: libpod-2d825c8d3808300f27a8399541c9476e4563d97c9ed105fdf99aca577c6d6c3a.scope: Deactivated successfully.
Jan 26 13:21:12 np0005596060 podman[273824]: 2026-01-26 18:21:12.229325872 +0000 UTC m=+0.136374487 container died 2d825c8d3808300f27a8399541c9476e4563d97c9ed105fdf99aca577c6d6c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:21:12 np0005596060 systemd[1]: var-lib-containers-storage-overlay-742253c17f6215409fac15089b5d762015c36577d79bbcf67f1c9601a28f7e63-merged.mount: Deactivated successfully.
Jan 26 13:21:12 np0005596060 podman[273824]: 2026-01-26 18:21:12.267496645 +0000 UTC m=+0.174545240 container remove 2d825c8d3808300f27a8399541c9476e4563d97c9ed105fdf99aca577c6d6c3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_thompson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 13:21:12 np0005596060 systemd[1]: libpod-conmon-2d825c8d3808300f27a8399541c9476e4563d97c9ed105fdf99aca577c6d6c3a.scope: Deactivated successfully.
Jan 26 13:21:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:12.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 26 13:21:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:21:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:21:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:21:12 np0005596060 podman[273865]: 2026-01-26 18:21:12.427029772 +0000 UTC m=+0.039907178 container create 7cd9ebe06f5174dc4266e1e0df1b7ac9069d2a9164981cdad18ced2c66e4e0e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 26 13:21:12 np0005596060 systemd[1]: Started libpod-conmon-7cd9ebe06f5174dc4266e1e0df1b7ac9069d2a9164981cdad18ced2c66e4e0e3.scope.
Jan 26 13:21:12 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:21:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ad2d858da6796a97dfbe7375eded9dcca8d336fd4365736a6b857e16fa03f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ad2d858da6796a97dfbe7375eded9dcca8d336fd4365736a6b857e16fa03f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ad2d858da6796a97dfbe7375eded9dcca8d336fd4365736a6b857e16fa03f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ad2d858da6796a97dfbe7375eded9dcca8d336fd4365736a6b857e16fa03f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19ad2d858da6796a97dfbe7375eded9dcca8d336fd4365736a6b857e16fa03f5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:12 np0005596060 podman[273865]: 2026-01-26 18:21:12.408413238 +0000 UTC m=+0.021290674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:21:12 np0005596060 podman[273865]: 2026-01-26 18:21:12.509132305 +0000 UTC m=+0.122009731 container init 7cd9ebe06f5174dc4266e1e0df1b7ac9069d2a9164981cdad18ced2c66e4e0e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_franklin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 13:21:12 np0005596060 podman[273865]: 2026-01-26 18:21:12.518497184 +0000 UTC m=+0.131374610 container start 7cd9ebe06f5174dc4266e1e0df1b7ac9069d2a9164981cdad18ced2c66e4e0e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_franklin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 13:21:12 np0005596060 podman[273865]: 2026-01-26 18:21:12.52188867 +0000 UTC m=+0.134766096 container attach 7cd9ebe06f5174dc4266e1e0df1b7ac9069d2a9164981cdad18ced2c66e4e0e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:21:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:12.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:21:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2997266158' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:21:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:21:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2997266158' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:21:12 np0005596060 nova_compute[247421]: 2026-01-26 18:21:12.894 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:13 np0005596060 tender_franklin[273881]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:21:13 np0005596060 tender_franklin[273881]: --> relative data size: 1.0
Jan 26 13:21:13 np0005596060 tender_franklin[273881]: --> All data devices are unavailable
Jan 26 13:21:13 np0005596060 systemd[1]: libpod-7cd9ebe06f5174dc4266e1e0df1b7ac9069d2a9164981cdad18ced2c66e4e0e3.scope: Deactivated successfully.
Jan 26 13:21:13 np0005596060 podman[273865]: 2026-01-26 18:21:13.310141974 +0000 UTC m=+0.923019380 container died 7cd9ebe06f5174dc4266e1e0df1b7ac9069d2a9164981cdad18ced2c66e4e0e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 26 13:21:13 np0005596060 systemd[1]: var-lib-containers-storage-overlay-19ad2d858da6796a97dfbe7375eded9dcca8d336fd4365736a6b857e16fa03f5-merged.mount: Deactivated successfully.
Jan 26 13:21:13 np0005596060 podman[273865]: 2026-01-26 18:21:13.362929049 +0000 UTC m=+0.975806455 container remove 7cd9ebe06f5174dc4266e1e0df1b7ac9069d2a9164981cdad18ced2c66e4e0e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_franklin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 13:21:13 np0005596060 systemd[1]: libpod-conmon-7cd9ebe06f5174dc4266e1e0df1b7ac9069d2a9164981cdad18ced2c66e4e0e3.scope: Deactivated successfully.
Jan 26 13:21:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:21:13 np0005596060 podman[274096]: 2026-01-26 18:21:13.935121816 +0000 UTC m=+0.036389109 container create cc6a97631e18f20ca78d00e689730ede99462c511f93d24944911309fb4a6e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brown, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:21:13 np0005596060 systemd[1]: Started libpod-conmon-cc6a97631e18f20ca78d00e689730ede99462c511f93d24944911309fb4a6e20.scope.
Jan 26 13:21:14 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:21:14 np0005596060 podman[274096]: 2026-01-26 18:21:13.920244407 +0000 UTC m=+0.021511730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:21:14 np0005596060 podman[274096]: 2026-01-26 18:21:14.023440808 +0000 UTC m=+0.124708101 container init cc6a97631e18f20ca78d00e689730ede99462c511f93d24944911309fb4a6e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 13:21:14 np0005596060 podman[274096]: 2026-01-26 18:21:14.029795819 +0000 UTC m=+0.131063102 container start cc6a97631e18f20ca78d00e689730ede99462c511f93d24944911309fb4a6e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 13:21:14 np0005596060 podman[274096]: 2026-01-26 18:21:14.033082133 +0000 UTC m=+0.134349456 container attach cc6a97631e18f20ca78d00e689730ede99462c511f93d24944911309fb4a6e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 13:21:14 np0005596060 inspiring_brown[274112]: 167 167
Jan 26 13:21:14 np0005596060 systemd[1]: libpod-cc6a97631e18f20ca78d00e689730ede99462c511f93d24944911309fb4a6e20.scope: Deactivated successfully.
Jan 26 13:21:14 np0005596060 podman[274096]: 2026-01-26 18:21:14.035494465 +0000 UTC m=+0.136761758 container died cc6a97631e18f20ca78d00e689730ede99462c511f93d24944911309fb4a6e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 13:21:14 np0005596060 systemd[1]: var-lib-containers-storage-overlay-1e930f51748ad98e6e50835b8a63e4d5505b82fe8a04a21d8dbe5966f30d5844-merged.mount: Deactivated successfully.
Jan 26 13:21:14 np0005596060 podman[274096]: 2026-01-26 18:21:14.070954169 +0000 UTC m=+0.172221462 container remove cc6a97631e18f20ca78d00e689730ede99462c511f93d24944911309fb4a6e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:21:14 np0005596060 systemd[1]: libpod-conmon-cc6a97631e18f20ca78d00e689730ede99462c511f93d24944911309fb4a6e20.scope: Deactivated successfully.
Jan 26 13:21:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:21:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:21:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:21:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:21:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:21:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:21:14 np0005596060 podman[274136]: 2026-01-26 18:21:14.229946252 +0000 UTC m=+0.041527390 container create 8f966a7544934a1382d207c35cdf3302839fe58e7a982dd86ed9f58a82c1ee08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nightingale, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:21:14 np0005596060 systemd[1]: Started libpod-conmon-8f966a7544934a1382d207c35cdf3302839fe58e7a982dd86ed9f58a82c1ee08.scope.
Jan 26 13:21:14 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:21:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e43c1e9b0a6c8a07dd58202b62418a2e2ad2bad48653bac732a26ff43e0672/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:14.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e43c1e9b0a6c8a07dd58202b62418a2e2ad2bad48653bac732a26ff43e0672/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e43c1e9b0a6c8a07dd58202b62418a2e2ad2bad48653bac732a26ff43e0672/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89e43c1e9b0a6c8a07dd58202b62418a2e2ad2bad48653bac732a26ff43e0672/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:14 np0005596060 podman[274136]: 2026-01-26 18:21:14.21298858 +0000 UTC m=+0.024569738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:21:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 1.4 MiB/s wr, 4 op/s
Jan 26 13:21:14 np0005596060 podman[274136]: 2026-01-26 18:21:14.318864479 +0000 UTC m=+0.130445637 container init 8f966a7544934a1382d207c35cdf3302839fe58e7a982dd86ed9f58a82c1ee08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nightingale, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:21:14 np0005596060 podman[274136]: 2026-01-26 18:21:14.327195471 +0000 UTC m=+0.138776609 container start 8f966a7544934a1382d207c35cdf3302839fe58e7a982dd86ed9f58a82c1ee08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:21:14 np0005596060 podman[274136]: 2026-01-26 18:21:14.331947922 +0000 UTC m=+0.143529080 container attach 8f966a7544934a1382d207c35cdf3302839fe58e7a982dd86ed9f58a82c1ee08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:21:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.004000102s ======
Jan 26 13:21:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:14.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000102s
Jan 26 13:21:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:21:14.751 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:21:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:21:14.754 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:21:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:21:14.754 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]: {
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:    "1": [
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:        {
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            "devices": [
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "/dev/loop3"
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            ],
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            "lv_name": "ceph_lv0",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            "lv_size": "7511998464",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            "name": "ceph_lv0",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            "tags": {
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "ceph.cluster_name": "ceph",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "ceph.crush_device_class": "",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "ceph.encrypted": "0",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "ceph.osd_id": "1",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "ceph.type": "block",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:                "ceph.vdo": "0"
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            },
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            "type": "block",
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:            "vg_name": "ceph_vg0"
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:        }
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]:    ]
Jan 26 13:21:15 np0005596060 admiring_nightingale[274153]: }
Jan 26 13:21:15 np0005596060 systemd[1]: libpod-8f966a7544934a1382d207c35cdf3302839fe58e7a982dd86ed9f58a82c1ee08.scope: Deactivated successfully.
Jan 26 13:21:15 np0005596060 podman[274136]: 2026-01-26 18:21:15.101568062 +0000 UTC m=+0.913149200 container died 8f966a7544934a1382d207c35cdf3302839fe58e7a982dd86ed9f58a82c1ee08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 13:21:15 np0005596060 systemd[1]: var-lib-containers-storage-overlay-89e43c1e9b0a6c8a07dd58202b62418a2e2ad2bad48653bac732a26ff43e0672-merged.mount: Deactivated successfully.
Jan 26 13:21:15 np0005596060 podman[274136]: 2026-01-26 18:21:15.166752363 +0000 UTC m=+0.978333501 container remove 8f966a7544934a1382d207c35cdf3302839fe58e7a982dd86ed9f58a82c1ee08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:21:15 np0005596060 systemd[1]: libpod-conmon-8f966a7544934a1382d207c35cdf3302839fe58e7a982dd86ed9f58a82c1ee08.scope: Deactivated successfully.
Jan 26 13:21:15 np0005596060 podman[274315]: 2026-01-26 18:21:15.796401144 +0000 UTC m=+0.044992587 container create 837d3ea946304edccb3f1b2bbfe7f72ae4f6abf4ca21a51b9fdefcffe48f326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:21:15 np0005596060 systemd[1]: Started libpod-conmon-837d3ea946304edccb3f1b2bbfe7f72ae4f6abf4ca21a51b9fdefcffe48f326d.scope.
Jan 26 13:21:15 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:21:15 np0005596060 podman[274315]: 2026-01-26 18:21:15.867007164 +0000 UTC m=+0.115598627 container init 837d3ea946304edccb3f1b2bbfe7f72ae4f6abf4ca21a51b9fdefcffe48f326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_keldysh, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 13:21:15 np0005596060 podman[274315]: 2026-01-26 18:21:15.77855801 +0000 UTC m=+0.027149463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:21:15 np0005596060 podman[274315]: 2026-01-26 18:21:15.874129866 +0000 UTC m=+0.122721309 container start 837d3ea946304edccb3f1b2bbfe7f72ae4f6abf4ca21a51b9fdefcffe48f326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 13:21:15 np0005596060 podman[274315]: 2026-01-26 18:21:15.87783691 +0000 UTC m=+0.126428373 container attach 837d3ea946304edccb3f1b2bbfe7f72ae4f6abf4ca21a51b9fdefcffe48f326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_keldysh, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 26 13:21:15 np0005596060 hopeful_keldysh[274331]: 167 167
Jan 26 13:21:15 np0005596060 systemd[1]: libpod-837d3ea946304edccb3f1b2bbfe7f72ae4f6abf4ca21a51b9fdefcffe48f326d.scope: Deactivated successfully.
Jan 26 13:21:15 np0005596060 conmon[274331]: conmon 837d3ea946304edccb3f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-837d3ea946304edccb3f1b2bbfe7f72ae4f6abf4ca21a51b9fdefcffe48f326d.scope/container/memory.events
Jan 26 13:21:15 np0005596060 podman[274315]: 2026-01-26 18:21:15.881159845 +0000 UTC m=+0.129751288 container died 837d3ea946304edccb3f1b2bbfe7f72ae4f6abf4ca21a51b9fdefcffe48f326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_keldysh, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:21:15 np0005596060 systemd[1]: var-lib-containers-storage-overlay-52231ddbd56078aaab0d44d699e914d295c1b45863118e3f885956b041cb6033-merged.mount: Deactivated successfully.
Jan 26 13:21:15 np0005596060 podman[274315]: 2026-01-26 18:21:15.918295572 +0000 UTC m=+0.166887015 container remove 837d3ea946304edccb3f1b2bbfe7f72ae4f6abf4ca21a51b9fdefcffe48f326d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:21:15 np0005596060 systemd[1]: libpod-conmon-837d3ea946304edccb3f1b2bbfe7f72ae4f6abf4ca21a51b9fdefcffe48f326d.scope: Deactivated successfully.
Jan 26 13:21:16 np0005596060 podman[274357]: 2026-01-26 18:21:16.084775116 +0000 UTC m=+0.045263545 container create 5db09a3cbd1810e220a9377b1c160aa0bc7003ab18c6d5fb257c749853fe9546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jennings, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:21:16 np0005596060 systemd[1]: Started libpod-conmon-5db09a3cbd1810e220a9377b1c160aa0bc7003ab18c6d5fb257c749853fe9546.scope.
Jan 26 13:21:16 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:21:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2527f55986a1a23a151303f82f369e83129f7c7058cebcf9c6dac0ded2b5f67/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2527f55986a1a23a151303f82f369e83129f7c7058cebcf9c6dac0ded2b5f67/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2527f55986a1a23a151303f82f369e83129f7c7058cebcf9c6dac0ded2b5f67/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2527f55986a1a23a151303f82f369e83129f7c7058cebcf9c6dac0ded2b5f67/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:21:16 np0005596060 podman[274357]: 2026-01-26 18:21:16.064399406 +0000 UTC m=+0.024887885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:21:16 np0005596060 podman[274357]: 2026-01-26 18:21:16.175471458 +0000 UTC m=+0.135959887 container init 5db09a3cbd1810e220a9377b1c160aa0bc7003ab18c6d5fb257c749853fe9546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jennings, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 13:21:16 np0005596060 podman[274357]: 2026-01-26 18:21:16.184440137 +0000 UTC m=+0.144928566 container start 5db09a3cbd1810e220a9377b1c160aa0bc7003ab18c6d5fb257c749853fe9546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 13:21:16 np0005596060 podman[274357]: 2026-01-26 18:21:16.188375607 +0000 UTC m=+0.148864036 container attach 5db09a3cbd1810e220a9377b1c160aa0bc7003ab18c6d5fb257c749853fe9546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jennings, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:21:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:16.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 88 MiB data, 290 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s rd, 369 KiB/s wr, 3 op/s
Jan 26 13:21:16 np0005596060 nova_compute[247421]: 2026-01-26 18:21:16.505 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:16.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:17 np0005596060 focused_jennings[274375]: {
Jan 26 13:21:17 np0005596060 focused_jennings[274375]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:21:17 np0005596060 focused_jennings[274375]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:21:17 np0005596060 focused_jennings[274375]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:21:17 np0005596060 focused_jennings[274375]:        "osd_id": 1,
Jan 26 13:21:17 np0005596060 focused_jennings[274375]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:21:17 np0005596060 focused_jennings[274375]:        "type": "bluestore"
Jan 26 13:21:17 np0005596060 focused_jennings[274375]:    }
Jan 26 13:21:17 np0005596060 focused_jennings[274375]: }
Jan 26 13:21:17 np0005596060 systemd[1]: libpod-5db09a3cbd1810e220a9377b1c160aa0bc7003ab18c6d5fb257c749853fe9546.scope: Deactivated successfully.
Jan 26 13:21:17 np0005596060 podman[274357]: 2026-01-26 18:21:17.113963041 +0000 UTC m=+1.074451540 container died 5db09a3cbd1810e220a9377b1c160aa0bc7003ab18c6d5fb257c749853fe9546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 13:21:17 np0005596060 nova_compute[247421]: 2026-01-26 18:21:17.897 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:18 np0005596060 nova_compute[247421]: 2026-01-26 18:21:18.106 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:21:18 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b2527f55986a1a23a151303f82f369e83129f7c7058cebcf9c6dac0ded2b5f67-merged.mount: Deactivated successfully.
Jan 26 13:21:18 np0005596060 podman[274357]: 2026-01-26 18:21:18.150578007 +0000 UTC m=+2.111066436 container remove 5db09a3cbd1810e220a9377b1c160aa0bc7003ab18c6d5fb257c749853fe9546 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_jennings, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:21:18 np0005596060 systemd[1]: libpod-conmon-5db09a3cbd1810e220a9377b1c160aa0bc7003ab18c6d5fb257c749853fe9546.scope: Deactivated successfully.
Jan 26 13:21:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:21:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:21:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:21:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:18.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 369 KiB/s wr, 16 op/s
Jan 26 13:21:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:21:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev eb30818a-7e07-4799-961a-2e4ee177a884 does not exist
Jan 26 13:21:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev eed0dc8c-2c2d-4ae0-a788-53cab7fea529 does not exist
Jan 26 13:21:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5a77c7a5-1a4c-4b72-91eb-415797618d76 does not exist
Jan 26 13:21:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:18.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:21:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:21:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:21:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:21:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:20.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:21:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 26 13:21:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:21:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:20.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:21:21 np0005596060 nova_compute[247421]: 2026-01-26 18:21:21.507 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:21:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:22.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:21:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 15 op/s
Jan 26 13:21:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:22.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:22 np0005596060 nova_compute[247421]: 2026-01-26 18:21:22.898 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:21:23 np0005596060 ceph-mgr[74563]: [devicehealth INFO root] Check health
Jan 26 13:21:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:24.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 13 op/s
Jan 26 13:21:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:24.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 13 op/s
Jan 26 13:21:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:26.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:26 np0005596060 nova_compute[247421]: 2026-01-26 18:21:26.508 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:26.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:27 np0005596060 podman[274463]: 2026-01-26 18:21:27.797917419 +0000 UTC m=+0.059011275 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 13:21:27 np0005596060 podman[274464]: 2026-01-26 18:21:27.837508478 +0000 UTC m=+0.096614344 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 26 13:21:27 np0005596060 nova_compute[247421]: 2026-01-26 18:21:27.900 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 9.2 KiB/s rd, 597 B/s wr, 13 op/s
Jan 26 13:21:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:28.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:28.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:21:29 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:21:29.450 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:21:29 np0005596060 nova_compute[247421]: 2026-01-26 18:21:29.450 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:29 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:21:29.451 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:21:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:21:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:30.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:21:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:21:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:30.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:21:31 np0005596060 nova_compute[247421]: 2026-01-26 18:21:31.509 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:21:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:32.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:21:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:21:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:32.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:21:32 np0005596060 nova_compute[247421]: 2026-01-26 18:21:32.959 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:21:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:21:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:34.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:21:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:34.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:36.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:36 np0005596060 nova_compute[247421]: 2026-01-26 18:21:36.511 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:21:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:36.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:21:37 np0005596060 nova_compute[247421]: 2026-01-26 18:21:37.961 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:21:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:38.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:21:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:21:38.453 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:21:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:21:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:38.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:21:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:21:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:40.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:40.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:41 np0005596060 nova_compute[247421]: 2026-01-26 18:21:41.513 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:42.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:21:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:42.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:21:42 np0005596060 nova_compute[247421]: 2026-01-26 18:21:42.963 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:21:44
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'vms']
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:44.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:44.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:21:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:21:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:46.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:46 np0005596060 nova_compute[247421]: 2026-01-26 18:21:46.515 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:46.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:47 np0005596060 nova_compute[247421]: 2026-01-26 18:21:47.966 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:21:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:48.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:21:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:21:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:48.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:21:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:21:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:50.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:50.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:51 np0005596060 nova_compute[247421]: 2026-01-26 18:21:51.516 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:21:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:52.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:21:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:52.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:52 np0005596060 nova_compute[247421]: 2026-01-26 18:21:52.968 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:21:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:54.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:54.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:21:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:56.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:56 np0005596060 nova_compute[247421]: 2026-01-26 18:21:56.521 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:21:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:56.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:21:57 np0005596060 nova_compute[247421]: 2026-01-26 18:21:57.970 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:21:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:21:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:21:58.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:21:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:21:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:21:58.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:21:58 np0005596060 podman[274625]: 2026-01-26 18:21:58.788378384 +0000 UTC m=+0.049228576 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 13:21:58 np0005596060 podman[274626]: 2026-01-26 18:21:58.815105425 +0000 UTC m=+0.075744992 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 13:21:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:22:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:22:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:00.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:00.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:01 np0005596060 nova_compute[247421]: 2026-01-26 18:22:01.523 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:22:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:02.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:22:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:02.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:22:02 np0005596060 nova_compute[247421]: 2026-01-26 18:22:02.973 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:03 np0005596060 nova_compute[247421]: 2026-01-26 18:22:03.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:03 np0005596060 nova_compute[247421]: 2026-01-26 18:22:03.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:03 np0005596060 nova_compute[247421]: 2026-01-26 18:22:03.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:22:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:22:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:22:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:22:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:04.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:04.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:05 np0005596060 nova_compute[247421]: 2026-01-26 18:22:05.653 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:22:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:06.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:06 np0005596060 nova_compute[247421]: 2026-01-26 18:22:06.524 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:22:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:06.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:22:06 np0005596060 nova_compute[247421]: 2026-01-26 18:22:06.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:07 np0005596060 nova_compute[247421]: 2026-01-26 18:22:07.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:07 np0005596060 nova_compute[247421]: 2026-01-26 18:22:07.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:07 np0005596060 nova_compute[247421]: 2026-01-26 18:22:07.976 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:22:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:08.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.417323) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451728417357, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2157, "num_deletes": 254, "total_data_size": 3946545, "memory_usage": 3999136, "flush_reason": "Manual Compaction"}
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451728437745, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3866524, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29247, "largest_seqno": 31402, "table_properties": {"data_size": 3856718, "index_size": 6300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19873, "raw_average_key_size": 20, "raw_value_size": 3837177, "raw_average_value_size": 3951, "num_data_blocks": 275, "num_entries": 971, "num_filter_entries": 971, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769451512, "oldest_key_time": 1769451512, "file_creation_time": 1769451728, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 20561 microseconds, and 8314 cpu microseconds.
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.437878) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3866524 bytes OK
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.437931) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.440233) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.440247) EVENT_LOG_v1 {"time_micros": 1769451728440243, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.440263) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3937840, prev total WAL file size 3937840, number of live WAL files 2.
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.441633) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3775KB)], [65(9236KB)]
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451728441664, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 13324932, "oldest_snapshot_seqno": -1}
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5858 keys, 11221898 bytes, temperature: kUnknown
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451728536619, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 11221898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11180417, "index_size": 25755, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14661, "raw_key_size": 148202, "raw_average_key_size": 25, "raw_value_size": 11072513, "raw_average_value_size": 1890, "num_data_blocks": 1046, "num_entries": 5858, "num_filter_entries": 5858, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769451728, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.536919) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 11221898 bytes
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.538259) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.2 rd, 118.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 9.0 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 6383, records dropped: 525 output_compression: NoCompression
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.538281) EVENT_LOG_v1 {"time_micros": 1769451728538271, "job": 36, "event": "compaction_finished", "compaction_time_micros": 95062, "compaction_time_cpu_micros": 26437, "output_level": 6, "num_output_files": 1, "total_output_size": 11221898, "num_input_records": 6383, "num_output_records": 5858, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451728539507, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451728542119, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.441569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.542250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.542255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.542258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.542261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:22:08 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:22:08.542263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:22:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:08.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:08 np0005596060 nova_compute[247421]: 2026-01-26 18:22:08.682 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:08 np0005596060 nova_compute[247421]: 2026-01-26 18:22:08.683 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:22:08 np0005596060 nova_compute[247421]: 2026-01-26 18:22:08.683 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:22:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:22:09 np0005596060 nova_compute[247421]: 2026-01-26 18:22:09.361 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:22:09 np0005596060 nova_compute[247421]: 2026-01-26 18:22:09.362 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:10.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:10.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:11 np0005596060 nova_compute[247421]: 2026-01-26 18:22:11.526 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:11 np0005596060 nova_compute[247421]: 2026-01-26 18:22:11.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:11 np0005596060 nova_compute[247421]: 2026-01-26 18:22:11.672 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:22:11 np0005596060 nova_compute[247421]: 2026-01-26 18:22:11.672 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:22:11 np0005596060 nova_compute[247421]: 2026-01-26 18:22:11.672 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:22:11 np0005596060 nova_compute[247421]: 2026-01-26 18:22:11.672 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:22:11 np0005596060 nova_compute[247421]: 2026-01-26 18:22:11.673 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:22:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:22:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2397939116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.078 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.224 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.225 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4823MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.225 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.225 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:22:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:12.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.490 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.490 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:22:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:12.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.611 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing inventories for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.632 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating ProviderTree inventory for provider c679f5ea-e093-4909-bb04-0342c8551a8f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.633 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.651 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing aggregate associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.674 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing trait associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.708 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:22:12 np0005596060 nova_compute[247421]: 2026-01-26 18:22:12.978 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:22:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1244757010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:22:13 np0005596060 nova_compute[247421]: 2026-01-26 18:22:13.137 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:22:13 np0005596060 nova_compute[247421]: 2026-01-26 18:22:13.143 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:22:13 np0005596060 nova_compute[247421]: 2026-01-26 18:22:13.167 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:22:13 np0005596060 nova_compute[247421]: 2026-01-26 18:22:13.169 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:22:13 np0005596060 nova_compute[247421]: 2026-01-26 18:22:13.169 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:22:13 np0005596060 nova_compute[247421]: 2026-01-26 18:22:13.170 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:13 np0005596060 nova_compute[247421]: 2026-01-26 18:22:13.170 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 26 13:22:13 np0005596060 nova_compute[247421]: 2026-01-26 18:22:13.185 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:22:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:22:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:22:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:22:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:22:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:22:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:22:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:14.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:14.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:22:14.752 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:22:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:22:14.752 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:22:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:22:14.752 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:22:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:16.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:16 np0005596060 nova_compute[247421]: 2026-01-26 18:22:16.527 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:16.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:17 np0005596060 nova_compute[247421]: 2026-01-26 18:22:17.980 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:22:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629779301' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:22:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:22:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3629779301' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:22:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 26 13:22:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:22:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:18.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:22:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:18.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:22:20 np0005596060 nova_compute[247421]: 2026-01-26 18:22:20.203 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 41 MiB data, 274 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 26 13:22:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:20.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:20.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:20 np0005596060 nova_compute[247421]: 2026-01-26 18:22:20.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:20 np0005596060 nova_compute[247421]: 2026-01-26 18:22:20.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 26 13:22:20 np0005596060 nova_compute[247421]: 2026-01-26 18:22:20.674 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 26 13:22:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:22:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:22:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:22:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:22:21 np0005596060 nova_compute[247421]: 2026-01-26 18:22:21.529 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:22:21 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev cbede4ec-f97f-4655-80c0-a5b1733f8665 does not exist
Jan 26 13:22:21 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d74d3f24-346a-4ff9-bab8-3a67cb479eeb does not exist
Jan 26 13:22:21 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 48460b5a-ed27-4148-854c-48307af7baa8 does not exist
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:22:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:22:22 np0005596060 podman[275052]: 2026-01-26 18:22:22.317845827 +0000 UTC m=+0.048116627 container create 513b9ff7915bd1348703f489a16f541a23134a609f1d686c615e524a7665e1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 13:22:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 26 13:22:22 np0005596060 systemd[1]: Started libpod-conmon-513b9ff7915bd1348703f489a16f541a23134a609f1d686c615e524a7665e1ae.scope.
Jan 26 13:22:22 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:22:22 np0005596060 podman[275052]: 2026-01-26 18:22:22.293799954 +0000 UTC m=+0.024070774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:22:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:22.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:22 np0005596060 podman[275052]: 2026-01-26 18:22:22.40584499 +0000 UTC m=+0.136115800 container init 513b9ff7915bd1348703f489a16f541a23134a609f1d686c615e524a7665e1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:22:22 np0005596060 podman[275052]: 2026-01-26 18:22:22.414251425 +0000 UTC m=+0.144522215 container start 513b9ff7915bd1348703f489a16f541a23134a609f1d686c615e524a7665e1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 13:22:22 np0005596060 podman[275052]: 2026-01-26 18:22:22.417417655 +0000 UTC m=+0.147688555 container attach 513b9ff7915bd1348703f489a16f541a23134a609f1d686c615e524a7665e1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gagarin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:22:22 np0005596060 brave_gagarin[275068]: 167 167
Jan 26 13:22:22 np0005596060 systemd[1]: libpod-513b9ff7915bd1348703f489a16f541a23134a609f1d686c615e524a7665e1ae.scope: Deactivated successfully.
Jan 26 13:22:22 np0005596060 conmon[275068]: conmon 513b9ff7915bd1348703 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-513b9ff7915bd1348703f489a16f541a23134a609f1d686c615e524a7665e1ae.scope/container/memory.events
Jan 26 13:22:22 np0005596060 podman[275052]: 2026-01-26 18:22:22.421817758 +0000 UTC m=+0.152088548 container died 513b9ff7915bd1348703f489a16f541a23134a609f1d686c615e524a7665e1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gagarin, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:22:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fc81866002b7ab8f2d99cce279eac28ae753502961841b5ea194f206cf4cfc57-merged.mount: Deactivated successfully.
Jan 26 13:22:22 np0005596060 podman[275052]: 2026-01-26 18:22:22.462880604 +0000 UTC m=+0.193151394 container remove 513b9ff7915bd1348703f489a16f541a23134a609f1d686c615e524a7665e1ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:22:22 np0005596060 systemd[1]: libpod-conmon-513b9ff7915bd1348703f489a16f541a23134a609f1d686c615e524a7665e1ae.scope: Deactivated successfully.
Jan 26 13:22:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:22.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:22 np0005596060 podman[275093]: 2026-01-26 18:22:22.637365102 +0000 UTC m=+0.038647916 container create a4854cfe22dcb62d177df1327395cd7de26228e46db8fa39c051496c6952024e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mayer, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:22:22 np0005596060 systemd[1]: Started libpod-conmon-a4854cfe22dcb62d177df1327395cd7de26228e46db8fa39c051496c6952024e.scope.
Jan 26 13:22:22 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:22:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f924040e7368fe51895f123eb12163ffc8ee9544b1c146e5ced15f7976d3d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f924040e7368fe51895f123eb12163ffc8ee9544b1c146e5ced15f7976d3d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f924040e7368fe51895f123eb12163ffc8ee9544b1c146e5ced15f7976d3d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f924040e7368fe51895f123eb12163ffc8ee9544b1c146e5ced15f7976d3d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42f924040e7368fe51895f123eb12163ffc8ee9544b1c146e5ced15f7976d3d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:22 np0005596060 podman[275093]: 2026-01-26 18:22:22.715879344 +0000 UTC m=+0.117162188 container init a4854cfe22dcb62d177df1327395cd7de26228e46db8fa39c051496c6952024e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mayer, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:22:22 np0005596060 podman[275093]: 2026-01-26 18:22:22.621993131 +0000 UTC m=+0.023275955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:22:22 np0005596060 podman[275093]: 2026-01-26 18:22:22.72238653 +0000 UTC m=+0.123669344 container start a4854cfe22dcb62d177df1327395cd7de26228e46db8fa39c051496c6952024e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mayer, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:22:22 np0005596060 podman[275093]: 2026-01-26 18:22:22.725952601 +0000 UTC m=+0.127235415 container attach a4854cfe22dcb62d177df1327395cd7de26228e46db8fa39c051496c6952024e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mayer, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:22:23 np0005596060 nova_compute[247421]: 2026-01-26 18:22:23.032 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:23 np0005596060 naughty_mayer[275109]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:22:23 np0005596060 naughty_mayer[275109]: --> relative data size: 1.0
Jan 26 13:22:23 np0005596060 naughty_mayer[275109]: --> All data devices are unavailable
Jan 26 13:22:23 np0005596060 systemd[1]: libpod-a4854cfe22dcb62d177df1327395cd7de26228e46db8fa39c051496c6952024e.scope: Deactivated successfully.
Jan 26 13:22:23 np0005596060 podman[275093]: 2026-01-26 18:22:23.604749014 +0000 UTC m=+1.006031848 container died a4854cfe22dcb62d177df1327395cd7de26228e46db8fa39c051496c6952024e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:22:23 np0005596060 systemd[1]: var-lib-containers-storage-overlay-42f924040e7368fe51895f123eb12163ffc8ee9544b1c146e5ced15f7976d3d0-merged.mount: Deactivated successfully.
Jan 26 13:22:23 np0005596060 podman[275093]: 2026-01-26 18:22:23.657667603 +0000 UTC m=+1.058950417 container remove a4854cfe22dcb62d177df1327395cd7de26228e46db8fa39c051496c6952024e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 13:22:23 np0005596060 systemd[1]: libpod-conmon-a4854cfe22dcb62d177df1327395cd7de26228e46db8fa39c051496c6952024e.scope: Deactivated successfully.
Jan 26 13:22:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:22:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 26 13:22:24 np0005596060 podman[275279]: 2026-01-26 18:22:24.280264884 +0000 UTC m=+0.027350818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:22:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:22:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:24.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:22:24 np0005596060 podman[275279]: 2026-01-26 18:22:24.454648429 +0000 UTC m=+0.201734363 container create 4c4d0f6050db47f690d7465331a18e25f19f43f9584ac40d38cc3d93991f2e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:22:24 np0005596060 systemd[1]: Started libpod-conmon-4c4d0f6050db47f690d7465331a18e25f19f43f9584ac40d38cc3d93991f2e66.scope.
Jan 26 13:22:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:22:24 np0005596060 podman[275279]: 2026-01-26 18:22:24.557839109 +0000 UTC m=+0.304925073 container init 4c4d0f6050db47f690d7465331a18e25f19f43f9584ac40d38cc3d93991f2e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:22:24 np0005596060 podman[275279]: 2026-01-26 18:22:24.567981998 +0000 UTC m=+0.315067942 container start 4c4d0f6050db47f690d7465331a18e25f19f43f9584ac40d38cc3d93991f2e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 13:22:24 np0005596060 podman[275279]: 2026-01-26 18:22:24.571255111 +0000 UTC m=+0.318341085 container attach 4c4d0f6050db47f690d7465331a18e25f19f43f9584ac40d38cc3d93991f2e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 26 13:22:24 np0005596060 xenodochial_galois[275295]: 167 167
Jan 26 13:22:24 np0005596060 systemd[1]: libpod-4c4d0f6050db47f690d7465331a18e25f19f43f9584ac40d38cc3d93991f2e66.scope: Deactivated successfully.
Jan 26 13:22:24 np0005596060 conmon[275295]: conmon 4c4d0f6050db47f690d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4c4d0f6050db47f690d7465331a18e25f19f43f9584ac40d38cc3d93991f2e66.scope/container/memory.events
Jan 26 13:22:24 np0005596060 podman[275279]: 2026-01-26 18:22:24.576231648 +0000 UTC m=+0.323317582 container died 4c4d0f6050db47f690d7465331a18e25f19f43f9584ac40d38cc3d93991f2e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:22:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay-5549ac228ca43b741bcbaf63ceeceee0eaf5a083dcf0978c309af05038f2dacb-merged.mount: Deactivated successfully.
Jan 26 13:22:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:24.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:24 np0005596060 podman[275279]: 2026-01-26 18:22:24.619784948 +0000 UTC m=+0.366870892 container remove 4c4d0f6050db47f690d7465331a18e25f19f43f9584ac40d38cc3d93991f2e66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_galois, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 13:22:24 np0005596060 systemd[1]: libpod-conmon-4c4d0f6050db47f690d7465331a18e25f19f43f9584ac40d38cc3d93991f2e66.scope: Deactivated successfully.
Jan 26 13:22:24 np0005596060 podman[275319]: 2026-01-26 18:22:24.803288176 +0000 UTC m=+0.051177426 container create 079e7442a247df387438ce3090cd3168befb3bb85d5b971e99ae4f90bf26415a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 13:22:24 np0005596060 systemd[1]: Started libpod-conmon-079e7442a247df387438ce3090cd3168befb3bb85d5b971e99ae4f90bf26415a.scope.
Jan 26 13:22:24 np0005596060 podman[275319]: 2026-01-26 18:22:24.775466437 +0000 UTC m=+0.023355697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:22:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:22:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e116a04ad105ca20e93f90d4d4dba521d8dcc846d6741a141f5f35f65ed38b17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e116a04ad105ca20e93f90d4d4dba521d8dcc846d6741a141f5f35f65ed38b17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e116a04ad105ca20e93f90d4d4dba521d8dcc846d6741a141f5f35f65ed38b17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e116a04ad105ca20e93f90d4d4dba521d8dcc846d6741a141f5f35f65ed38b17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:24 np0005596060 podman[275319]: 2026-01-26 18:22:24.908074707 +0000 UTC m=+0.155964007 container init 079e7442a247df387438ce3090cd3168befb3bb85d5b971e99ae4f90bf26415a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jepsen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:22:24 np0005596060 podman[275319]: 2026-01-26 18:22:24.916235335 +0000 UTC m=+0.164124595 container start 079e7442a247df387438ce3090cd3168befb3bb85d5b971e99ae4f90bf26415a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jepsen, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:22:24 np0005596060 podman[275319]: 2026-01-26 18:22:24.92032947 +0000 UTC m=+0.168218710 container attach 079e7442a247df387438ce3090cd3168befb3bb85d5b971e99ae4f90bf26415a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]: {
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:    "1": [
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:        {
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            "devices": [
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "/dev/loop3"
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            ],
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            "lv_name": "ceph_lv0",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            "lv_size": "7511998464",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            "name": "ceph_lv0",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            "tags": {
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "ceph.cluster_name": "ceph",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "ceph.crush_device_class": "",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "ceph.encrypted": "0",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "ceph.osd_id": "1",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "ceph.type": "block",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:                "ceph.vdo": "0"
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            },
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            "type": "block",
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:            "vg_name": "ceph_vg0"
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:        }
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]:    ]
Jan 26 13:22:25 np0005596060 funny_jepsen[275335]: }
Jan 26 13:22:25 np0005596060 systemd[1]: libpod-079e7442a247df387438ce3090cd3168befb3bb85d5b971e99ae4f90bf26415a.scope: Deactivated successfully.
Jan 26 13:22:25 np0005596060 podman[275319]: 2026-01-26 18:22:25.742518789 +0000 UTC m=+0.990408119 container died 079e7442a247df387438ce3090cd3168befb3bb85d5b971e99ae4f90bf26415a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 13:22:25 np0005596060 systemd[1]: var-lib-containers-storage-overlay-e116a04ad105ca20e93f90d4d4dba521d8dcc846d6741a141f5f35f65ed38b17-merged.mount: Deactivated successfully.
Jan 26 13:22:25 np0005596060 podman[275319]: 2026-01-26 18:22:25.809467506 +0000 UTC m=+1.057356746 container remove 079e7442a247df387438ce3090cd3168befb3bb85d5b971e99ae4f90bf26415a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 13:22:25 np0005596060 systemd[1]: libpod-conmon-079e7442a247df387438ce3090cd3168befb3bb85d5b971e99ae4f90bf26415a.scope: Deactivated successfully.
Jan 26 13:22:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 26 13:22:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:26.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:26 np0005596060 nova_compute[247421]: 2026-01-26 18:22:26.531 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:26 np0005596060 podman[275500]: 2026-01-26 18:22:26.589983373 +0000 UTC m=+0.054897960 container create bec43e1188b75c30f052d6014676a2eb94566bf577cbdcaf3d825e366b892d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rhodes, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 13:22:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:26.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:26 np0005596060 systemd[1]: Started libpod-conmon-bec43e1188b75c30f052d6014676a2eb94566bf577cbdcaf3d825e366b892d42.scope.
Jan 26 13:22:26 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:22:26 np0005596060 podman[275500]: 2026-01-26 18:22:26.562922333 +0000 UTC m=+0.027836930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:22:26 np0005596060 podman[275500]: 2026-01-26 18:22:26.67146086 +0000 UTC m=+0.136375857 container init bec43e1188b75c30f052d6014676a2eb94566bf577cbdcaf3d825e366b892d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rhodes, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:22:26 np0005596060 podman[275500]: 2026-01-26 18:22:26.677388431 +0000 UTC m=+0.142302978 container start bec43e1188b75c30f052d6014676a2eb94566bf577cbdcaf3d825e366b892d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 13:22:26 np0005596060 podman[275500]: 2026-01-26 18:22:26.680887341 +0000 UTC m=+0.145801928 container attach bec43e1188b75c30f052d6014676a2eb94566bf577cbdcaf3d825e366b892d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 13:22:26 np0005596060 condescending_rhodes[275517]: 167 167
Jan 26 13:22:26 np0005596060 systemd[1]: libpod-bec43e1188b75c30f052d6014676a2eb94566bf577cbdcaf3d825e366b892d42.scope: Deactivated successfully.
Jan 26 13:22:26 np0005596060 conmon[275517]: conmon bec43e1188b75c30f052 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bec43e1188b75c30f052d6014676a2eb94566bf577cbdcaf3d825e366b892d42.scope/container/memory.events
Jan 26 13:22:26 np0005596060 podman[275500]: 2026-01-26 18:22:26.68439833 +0000 UTC m=+0.149312877 container died bec43e1188b75c30f052d6014676a2eb94566bf577cbdcaf3d825e366b892d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:22:26 np0005596060 systemd[1]: var-lib-containers-storage-overlay-056268c4f553b95064d910fda4099f701792ae502e92eff5c2316af22182fbe2-merged.mount: Deactivated successfully.
Jan 26 13:22:26 np0005596060 podman[275500]: 2026-01-26 18:22:26.728437663 +0000 UTC m=+0.193352200 container remove bec43e1188b75c30f052d6014676a2eb94566bf577cbdcaf3d825e366b892d42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_rhodes, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:22:26 np0005596060 systemd[1]: libpod-conmon-bec43e1188b75c30f052d6014676a2eb94566bf577cbdcaf3d825e366b892d42.scope: Deactivated successfully.
Jan 26 13:22:26 np0005596060 podman[275540]: 2026-01-26 18:22:26.918293903 +0000 UTC m=+0.041476719 container create a9d7454f8e3dda418562d66934e9a6a4cb196ce223b3461165443b0fe2351fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:22:26 np0005596060 systemd[1]: Started libpod-conmon-a9d7454f8e3dda418562d66934e9a6a4cb196ce223b3461165443b0fe2351fe4.scope.
Jan 26 13:22:26 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:22:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ede90435f546614e4244959e15e451e6ea80976cfa513251576ece0d33a66f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ede90435f546614e4244959e15e451e6ea80976cfa513251576ece0d33a66f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ede90435f546614e4244959e15e451e6ea80976cfa513251576ece0d33a66f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73ede90435f546614e4244959e15e451e6ea80976cfa513251576ece0d33a66f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:22:26 np0005596060 podman[275540]: 2026-01-26 18:22:26.902414508 +0000 UTC m=+0.025597324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:22:27 np0005596060 podman[275540]: 2026-01-26 18:22:27.003502025 +0000 UTC m=+0.126684861 container init a9d7454f8e3dda418562d66934e9a6a4cb196ce223b3461165443b0fe2351fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendel, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 13:22:27 np0005596060 podman[275540]: 2026-01-26 18:22:27.019573885 +0000 UTC m=+0.142756721 container start a9d7454f8e3dda418562d66934e9a6a4cb196ce223b3461165443b0fe2351fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendel, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 13:22:27 np0005596060 podman[275540]: 2026-01-26 18:22:27.023654219 +0000 UTC m=+0.146837075 container attach a9d7454f8e3dda418562d66934e9a6a4cb196ce223b3461165443b0fe2351fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendel, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 13:22:27 np0005596060 suspicious_mendel[275556]: {
Jan 26 13:22:27 np0005596060 suspicious_mendel[275556]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:22:27 np0005596060 suspicious_mendel[275556]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:22:27 np0005596060 suspicious_mendel[275556]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:22:27 np0005596060 suspicious_mendel[275556]:        "osd_id": 1,
Jan 26 13:22:27 np0005596060 suspicious_mendel[275556]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:22:27 np0005596060 suspicious_mendel[275556]:        "type": "bluestore"
Jan 26 13:22:27 np0005596060 suspicious_mendel[275556]:    }
Jan 26 13:22:27 np0005596060 suspicious_mendel[275556]: }
Jan 26 13:22:27 np0005596060 systemd[1]: libpod-a9d7454f8e3dda418562d66934e9a6a4cb196ce223b3461165443b0fe2351fe4.scope: Deactivated successfully.
Jan 26 13:22:27 np0005596060 podman[275540]: 2026-01-26 18:22:27.876617413 +0000 UTC m=+0.999800249 container died a9d7454f8e3dda418562d66934e9a6a4cb196ce223b3461165443b0fe2351fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 13:22:27 np0005596060 systemd[1]: var-lib-containers-storage-overlay-73ede90435f546614e4244959e15e451e6ea80976cfa513251576ece0d33a66f-merged.mount: Deactivated successfully.
Jan 26 13:22:27 np0005596060 podman[275540]: 2026-01-26 18:22:27.940270115 +0000 UTC m=+1.063452911 container remove a9d7454f8e3dda418562d66934e9a6a4cb196ce223b3461165443b0fe2351fe4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_mendel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:22:27 np0005596060 systemd[1]: libpod-conmon-a9d7454f8e3dda418562d66934e9a6a4cb196ce223b3461165443b0fe2351fe4.scope: Deactivated successfully.
Jan 26 13:22:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:22:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:22:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:22:28 np0005596060 nova_compute[247421]: 2026-01-26 18:22:28.035 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:22:28 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8a384abc-5f8e-4629-ba04-1d25a0283a56 does not exist
Jan 26 13:22:28 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 67a470ec-3800-4e76-ae72-00a493c1a508 does not exist
Jan 26 13:22:28 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 77976b9d-8d8f-4778-900f-650c14b22e1f does not exist
Jan 26 13:22:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Jan 26 13:22:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:28.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:28.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:29 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:22:29 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:22:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:22:29 np0005596060 podman[275639]: 2026-01-26 18:22:29.791798444 +0000 UTC m=+0.054393227 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:22:29 np0005596060 podman[275640]: 2026-01-26 18:22:29.845407121 +0000 UTC m=+0.098487372 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:22:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 26 13:22:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:22:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:30.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:22:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:30.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:31 np0005596060 nova_compute[247421]: 2026-01-26 18:22:31.533 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 9.9 KiB/s rd, 255 B/s wr, 13 op/s
Jan 26 13:22:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:22:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:32.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:22:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:32.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:33 np0005596060 nova_compute[247421]: 2026-01-26 18:22:33.037 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 26 13:22:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:22:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:34.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:22:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:22:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:34.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:36.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:36 np0005596060 nova_compute[247421]: 2026-01-26 18:22:36.535 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:36.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:37 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:22:37.848 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:22:37 np0005596060 nova_compute[247421]: 2026-01-26 18:22:37.848 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:37 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:22:37.849 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:22:38 np0005596060 nova_compute[247421]: 2026-01-26 18:22:38.039 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:22:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:38.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:22:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000051s ======
Jan 26 13:22:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:38.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 26 13:22:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:22:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:40.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:22:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:40.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:22:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:22:40.850 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:22:41 np0005596060 nova_compute[247421]: 2026-01-26 18:22:41.577 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:42.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:42.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:43 np0005596060 nova_compute[247421]: 2026-01-26 18:22:43.041 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:22:44
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'volumes', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta']
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:22:44 np0005596060 nova_compute[247421]: 2026-01-26 18:22:44.263 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:44.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:22:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:44.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:22:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:22:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:46.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:46 np0005596060 nova_compute[247421]: 2026-01-26 18:22:46.579 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:46.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:48 np0005596060 nova_compute[247421]: 2026-01-26 18:22:48.043 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:48.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:48.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:22:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:50.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:50.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:51 np0005596060 nova_compute[247421]: 2026-01-26 18:22:51.582 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:22:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:52.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:22:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:52.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:53 np0005596060 nova_compute[247421]: 2026-01-26 18:22:53.045 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 26 13:22:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:54.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:22:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:54.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 41 MiB data, 278 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:22:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:56.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:56 np0005596060 nova_compute[247421]: 2026-01-26 18:22:56.585 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:56.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:58 np0005596060 nova_compute[247421]: 2026-01-26 18:22:58.047 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:22:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 26 13:22:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:22:58.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:22:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:22:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:22:58.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:22:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:23:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 26 13:23:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:00.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:00.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:00 np0005596060 podman[275802]: 2026-01-26 18:23:00.827253534 +0000 UTC m=+0.073432833 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 13:23:00 np0005596060 podman[275803]: 2026-01-26 18:23:00.858275714 +0000 UTC m=+0.105314945 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:23:01 np0005596060 nova_compute[247421]: 2026-01-26 18:23:01.588 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 26 13:23:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:02.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:23:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:02.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:23:03 np0005596060 nova_compute[247421]: 2026-01-26 18:23:03.048 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:23:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:23:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 26 13:23:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:04.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:23:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:04.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:04 np0005596060 nova_compute[247421]: 2026-01-26 18:23:04.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:23:04 np0005596060 nova_compute[247421]: 2026-01-26 18:23:04.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:23:04 np0005596060 nova_compute[247421]: 2026-01-26 18:23:04.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:23:05 np0005596060 nova_compute[247421]: 2026-01-26 18:23:05.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:23:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 26 13:23:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:06.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:06 np0005596060 nova_compute[247421]: 2026-01-26 18:23:06.590 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:06.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:08 np0005596060 nova_compute[247421]: 2026-01-26 18:23:08.050 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 100 KiB/s rd, 0 B/s wr, 167 op/s
Jan 26 13:23:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:08.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:08 np0005596060 nova_compute[247421]: 2026-01-26 18:23:08.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:23:08 np0005596060 nova_compute[247421]: 2026-01-26 18:23:08.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:23:08 np0005596060 nova_compute[247421]: 2026-01-26 18:23:08.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:23:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:08.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:08 np0005596060 nova_compute[247421]: 2026-01-26 18:23:08.664 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:23:08 np0005596060 nova_compute[247421]: 2026-01-26 18:23:08.664 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:23:08 np0005596060 nova_compute[247421]: 2026-01-26 18:23:08.665 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.481509) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451789481626, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1037, "num_deletes": 505, "total_data_size": 1105021, "memory_usage": 1133608, "flush_reason": "Manual Compaction"}
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451789491396, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 725128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31403, "largest_seqno": 32439, "table_properties": {"data_size": 721111, "index_size": 1221, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13152, "raw_average_key_size": 19, "raw_value_size": 710643, "raw_average_value_size": 1031, "num_data_blocks": 53, "num_entries": 689, "num_filter_entries": 689, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769451729, "oldest_key_time": 1769451729, "file_creation_time": 1769451789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 9962 microseconds, and 5937 cpu microseconds.
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.491483) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 725128 bytes OK
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.491518) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.493549) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.493585) EVENT_LOG_v1 {"time_micros": 1769451789493573, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.493621) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 1099153, prev total WAL file size 1099153, number of live WAL files 2.
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.494657) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(708KB)], [68(10MB)]
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451789494728, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11947026, "oldest_snapshot_seqno": -1}
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5546 keys, 8352781 bytes, temperature: kUnknown
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451789575223, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8352781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8316817, "index_size": 21006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 143072, "raw_average_key_size": 25, "raw_value_size": 8217807, "raw_average_value_size": 1481, "num_data_blocks": 844, "num_entries": 5546, "num_filter_entries": 5546, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769451789, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.575730) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8352781 bytes
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.577245) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.0 rd, 103.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 10.7 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(28.0) write-amplify(11.5) OK, records in: 6547, records dropped: 1001 output_compression: NoCompression
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.577280) EVENT_LOG_v1 {"time_micros": 1769451789577263, "job": 38, "event": "compaction_finished", "compaction_time_micros": 80709, "compaction_time_cpu_micros": 43018, "output_level": 6, "num_output_files": 1, "total_output_size": 8352781, "num_input_records": 6547, "num_output_records": 5546, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451789577902, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451789581875, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.494539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.582042) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.582052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.582057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.582061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:23:09 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:23:09.582065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:23:09 np0005596060 nova_compute[247421]: 2026-01-26 18:23:09.659 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:23:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:23:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:23:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:10.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:23:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000051s ======
Jan 26 13:23:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:10.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 26 13:23:11 np0005596060 nova_compute[247421]: 2026-01-26 18:23:11.639 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:11 np0005596060 nova_compute[247421]: 2026-01-26 18:23:11.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:23:11 np0005596060 nova_compute[247421]: 2026-01-26 18:23:11.676 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:23:11 np0005596060 nova_compute[247421]: 2026-01-26 18:23:11.677 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:23:11 np0005596060 nova_compute[247421]: 2026-01-26 18:23:11.677 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:23:11 np0005596060 nova_compute[247421]: 2026-01-26 18:23:11.677 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:23:11 np0005596060 nova_compute[247421]: 2026-01-26 18:23:11.678 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:23:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:23:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3443951612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:23:12 np0005596060 nova_compute[247421]: 2026-01-26 18:23:12.180 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:23:12 np0005596060 nova_compute[247421]: 2026-01-26 18:23:12.369 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:23:12 np0005596060 nova_compute[247421]: 2026-01-26 18:23:12.371 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4825MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:23:12 np0005596060 nova_compute[247421]: 2026-01-26 18:23:12.371 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:23:12 np0005596060 nova_compute[247421]: 2026-01-26 18:23:12.371 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:23:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:23:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:12.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:12 np0005596060 nova_compute[247421]: 2026-01-26 18:23:12.568 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 13:23:12 np0005596060 nova_compute[247421]: 2026-01-26 18:23:12.569 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 13:23:12 np0005596060 nova_compute[247421]: 2026-01-26 18:23:12.587 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:23:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:12.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:13 np0005596060 nova_compute[247421]: 2026-01-26 18:23:13.053 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:23:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:23:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3385640892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:23:13 np0005596060 nova_compute[247421]: 2026-01-26 18:23:13.116 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:23:13 np0005596060 nova_compute[247421]: 2026-01-26 18:23:13.125 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:23:13 np0005596060 nova_compute[247421]: 2026-01-26 18:23:13.241 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:23:13 np0005596060 nova_compute[247421]: 2026-01-26 18:23:13.243 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 13:23:13 np0005596060 nova_compute[247421]: 2026-01-26 18:23:13.243 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:23:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:23:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:23:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:23:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:23:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:23:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:23:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:23:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:14.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:23:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:23:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:14.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:23:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:23:14.752 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:23:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:23:14.753 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:23:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:23:14.753 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:23:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:23:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:16.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:16 np0005596060 nova_compute[247421]: 2026-01-26 18:23:16.640 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:23:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:16.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:18 np0005596060 nova_compute[247421]: 2026-01-26 18:23:18.056 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:23:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 26 13:23:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:18.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:23:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:18.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:23:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:23:20 np0005596060 nova_compute[247421]: 2026-01-26 18:23:20.245 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:23:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 op/s
Jan 26 13:23:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:20.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:20.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:21 np0005596060 nova_compute[247421]: 2026-01-26 18:23:21.642 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:23:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 26 13:23:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:23:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:22.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:23:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:22.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:23 np0005596060 nova_compute[247421]: 2026-01-26 18:23:23.056 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:23:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 26 13:23:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:23:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:24.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:24.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 26 13:23:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:26.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:26 np0005596060 nova_compute[247421]: 2026-01-26 18:23:26.644 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:23:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:26.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:28 np0005596060 nova_compute[247421]: 2026-01-26 18:23:28.113 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:23:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 88 MiB data, 288 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 26 13:23:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:28.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:28.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:23:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:23:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:23:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:23:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:23:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 88 MiB data, 288 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 26 13:23:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:30.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:30.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:23:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:23:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:23:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:23:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:23:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:31 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 6faa43b3-ff18-4a5c-a936-9bd033537eb7 does not exist
Jan 26 13:23:31 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 455971aa-fe53-449e-a9cc-e3199e9ef5ae does not exist
Jan 26 13:23:31 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 65b0a302-72e7-4f62-af86-cc6dda154ea7 does not exist
Jan 26 13:23:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:23:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:23:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:23:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:23:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:23:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:23:31 np0005596060 podman[276231]: 2026-01-26 18:23:31.268588689 +0000 UTC m=+0.082096214 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:23:31 np0005596060 podman[276232]: 2026-01-26 18:23:31.291127164 +0000 UTC m=+0.104615018 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller)
Jan 26 13:23:31 np0005596060 nova_compute[247421]: 2026-01-26 18:23:31.647 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:23:31 np0005596060 podman[276394]: 2026-01-26 18:23:31.75355329 +0000 UTC m=+0.063150179 container create a27a69a70e7abd2fab0ed1aef4860cdd887d49d03f945285bd17fc2a8cd32677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 13:23:31 np0005596060 systemd[1]: Started libpod-conmon-a27a69a70e7abd2fab0ed1aef4860cdd887d49d03f945285bd17fc2a8cd32677.scope.
Jan 26 13:23:31 np0005596060 podman[276394]: 2026-01-26 18:23:31.721396402 +0000 UTC m=+0.030993371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:23:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:23:31 np0005596060 podman[276394]: 2026-01-26 18:23:31.850347523 +0000 UTC m=+0.159944442 container init a27a69a70e7abd2fab0ed1aef4860cdd887d49d03f945285bd17fc2a8cd32677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:23:31 np0005596060 podman[276394]: 2026-01-26 18:23:31.858414088 +0000 UTC m=+0.168010987 container start a27a69a70e7abd2fab0ed1aef4860cdd887d49d03f945285bd17fc2a8cd32677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 13:23:31 np0005596060 podman[276394]: 2026-01-26 18:23:31.863080817 +0000 UTC m=+0.172677716 container attach a27a69a70e7abd2fab0ed1aef4860cdd887d49d03f945285bd17fc2a8cd32677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:23:31 np0005596060 beautiful_heisenberg[276411]: 167 167
Jan 26 13:23:31 np0005596060 systemd[1]: libpod-a27a69a70e7abd2fab0ed1aef4860cdd887d49d03f945285bd17fc2a8cd32677.scope: Deactivated successfully.
Jan 26 13:23:31 np0005596060 podman[276394]: 2026-01-26 18:23:31.868474004 +0000 UTC m=+0.178070923 container died a27a69a70e7abd2fab0ed1aef4860cdd887d49d03f945285bd17fc2a8cd32677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 13:23:31 np0005596060 systemd[1]: var-lib-containers-storage-overlay-46f645d7acfa62d79c167da13bb099d59c0cc09e95907ccb43faf2c4b84fa6a4-merged.mount: Deactivated successfully.
Jan 26 13:23:31 np0005596060 podman[276394]: 2026-01-26 18:23:31.915312496 +0000 UTC m=+0.224909385 container remove a27a69a70e7abd2fab0ed1aef4860cdd887d49d03f945285bd17fc2a8cd32677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 13:23:31 np0005596060 systemd[1]: libpod-conmon-a27a69a70e7abd2fab0ed1aef4860cdd887d49d03f945285bd17fc2a8cd32677.scope: Deactivated successfully.
Jan 26 13:23:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:23:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:31 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:23:32 np0005596060 podman[276434]: 2026-01-26 18:23:32.094261418 +0000 UTC m=+0.051183133 container create 21b512f952302bfe5ed9613c8b7870c52e999284ab8b13148dafec83744997d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_yonath, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 13:23:32 np0005596060 systemd[1]: Started libpod-conmon-21b512f952302bfe5ed9613c8b7870c52e999284ab8b13148dafec83744997d5.scope.
Jan 26 13:23:32 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:23:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe98489bae9af9ae8f457081958923175eb60a9f5b38da28511bafa998fdd27e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe98489bae9af9ae8f457081958923175eb60a9f5b38da28511bafa998fdd27e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe98489bae9af9ae8f457081958923175eb60a9f5b38da28511bafa998fdd27e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe98489bae9af9ae8f457081958923175eb60a9f5b38da28511bafa998fdd27e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe98489bae9af9ae8f457081958923175eb60a9f5b38da28511bafa998fdd27e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:32 np0005596060 podman[276434]: 2026-01-26 18:23:32.073181422 +0000 UTC m=+0.030103137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:23:32 np0005596060 podman[276434]: 2026-01-26 18:23:32.177773692 +0000 UTC m=+0.134695417 container init 21b512f952302bfe5ed9613c8b7870c52e999284ab8b13148dafec83744997d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:23:32 np0005596060 podman[276434]: 2026-01-26 18:23:32.189098581 +0000 UTC m=+0.146020296 container start 21b512f952302bfe5ed9613c8b7870c52e999284ab8b13148dafec83744997d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_yonath, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 13:23:32 np0005596060 podman[276434]: 2026-01-26 18:23:32.193120703 +0000 UTC m=+0.150042428 container attach 21b512f952302bfe5ed9613c8b7870c52e999284ab8b13148dafec83744997d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Jan 26 13:23:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 88 MiB data, 288 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 26 13:23:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:32.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:23:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:32.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:23:33 np0005596060 xenodochial_yonath[276452]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:23:33 np0005596060 xenodochial_yonath[276452]: --> relative data size: 1.0
Jan 26 13:23:33 np0005596060 xenodochial_yonath[276452]: --> All data devices are unavailable
Jan 26 13:23:33 np0005596060 systemd[1]: libpod-21b512f952302bfe5ed9613c8b7870c52e999284ab8b13148dafec83744997d5.scope: Deactivated successfully.
Jan 26 13:23:33 np0005596060 podman[276434]: 2026-01-26 18:23:33.11131249 +0000 UTC m=+1.068234215 container died 21b512f952302bfe5ed9613c8b7870c52e999284ab8b13148dafec83744997d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_yonath, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 13:23:33 np0005596060 nova_compute[247421]: 2026-01-26 18:23:33.115 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:23:33 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fe98489bae9af9ae8f457081958923175eb60a9f5b38da28511bafa998fdd27e-merged.mount: Deactivated successfully.
Jan 26 13:23:33 np0005596060 podman[276434]: 2026-01-26 18:23:33.201490954 +0000 UTC m=+1.158412659 container remove 21b512f952302bfe5ed9613c8b7870c52e999284ab8b13148dafec83744997d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_yonath, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:23:33 np0005596060 systemd[1]: libpod-conmon-21b512f952302bfe5ed9613c8b7870c52e999284ab8b13148dafec83744997d5.scope: Deactivated successfully.
Jan 26 13:23:33 np0005596060 podman[276619]: 2026-01-26 18:23:33.907016903 +0000 UTC m=+0.047926060 container create f9e587f39131cce1402d07a66367f4ad8d7aab3b504adb5c3dbc7c9e55919f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meninsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 13:23:33 np0005596060 systemd[1]: Started libpod-conmon-f9e587f39131cce1402d07a66367f4ad8d7aab3b504adb5c3dbc7c9e55919f4b.scope.
Jan 26 13:23:33 np0005596060 podman[276619]: 2026-01-26 18:23:33.889160048 +0000 UTC m=+0.030069235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:23:33 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:23:34 np0005596060 podman[276619]: 2026-01-26 18:23:34.003978969 +0000 UTC m=+0.144888146 container init f9e587f39131cce1402d07a66367f4ad8d7aab3b504adb5c3dbc7c9e55919f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:23:34 np0005596060 podman[276619]: 2026-01-26 18:23:34.017093753 +0000 UTC m=+0.158002910 container start f9e587f39131cce1402d07a66367f4ad8d7aab3b504adb5c3dbc7c9e55919f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 26 13:23:34 np0005596060 podman[276619]: 2026-01-26 18:23:34.021914416 +0000 UTC m=+0.162823593 container attach f9e587f39131cce1402d07a66367f4ad8d7aab3b504adb5c3dbc7c9e55919f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meninsky, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:23:34 np0005596060 pedantic_meninsky[276635]: 167 167
Jan 26 13:23:34 np0005596060 systemd[1]: libpod-f9e587f39131cce1402d07a66367f4ad8d7aab3b504adb5c3dbc7c9e55919f4b.scope: Deactivated successfully.
Jan 26 13:23:34 np0005596060 conmon[276635]: conmon f9e587f39131cce1402d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f9e587f39131cce1402d07a66367f4ad8d7aab3b504adb5c3dbc7c9e55919f4b.scope/container/memory.events
Jan 26 13:23:34 np0005596060 podman[276619]: 2026-01-26 18:23:34.026919903 +0000 UTC m=+0.167829060 container died f9e587f39131cce1402d07a66367f4ad8d7aab3b504adb5c3dbc7c9e55919f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 26 13:23:34 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c82565212632602e2925499e3535da2813aa03b36db52a33c1e8231f1b711752-merged.mount: Deactivated successfully.
Jan 26 13:23:34 np0005596060 podman[276619]: 2026-01-26 18:23:34.068589013 +0000 UTC m=+0.209498190 container remove f9e587f39131cce1402d07a66367f4ad8d7aab3b504adb5c3dbc7c9e55919f4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_meninsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:23:34 np0005596060 systemd[1]: libpod-conmon-f9e587f39131cce1402d07a66367f4ad8d7aab3b504adb5c3dbc7c9e55919f4b.scope: Deactivated successfully.
Jan 26 13:23:34 np0005596060 podman[276659]: 2026-01-26 18:23:34.226658184 +0000 UTC m=+0.024942785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:23:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 88 MiB data, 288 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 26 13:23:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:23:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:23:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:34.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:23:34 np0005596060 podman[276659]: 2026-01-26 18:23:34.573744664 +0000 UTC m=+0.372029245 container create 8b30439463e7fec311a3143ea0130fea06d7438f958aa8278b13b864fd0e2d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_beaver, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 13:23:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:34.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:34 np0005596060 systemd[1]: Started libpod-conmon-8b30439463e7fec311a3143ea0130fea06d7438f958aa8278b13b864fd0e2d7d.scope.
Jan 26 13:23:34 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:23:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1322848cce7dedd12fa94e56894570768d8e00664d39a89a7a704e00e1d73c00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1322848cce7dedd12fa94e56894570768d8e00664d39a89a7a704e00e1d73c00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1322848cce7dedd12fa94e56894570768d8e00664d39a89a7a704e00e1d73c00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1322848cce7dedd12fa94e56894570768d8e00664d39a89a7a704e00e1d73c00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:35 np0005596060 podman[276659]: 2026-01-26 18:23:35.003749734 +0000 UTC m=+0.802034335 container init 8b30439463e7fec311a3143ea0130fea06d7438f958aa8278b13b864fd0e2d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 13:23:35 np0005596060 podman[276659]: 2026-01-26 18:23:35.01381436 +0000 UTC m=+0.812098941 container start 8b30439463e7fec311a3143ea0130fea06d7438f958aa8278b13b864fd0e2d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:23:35 np0005596060 podman[276659]: 2026-01-26 18:23:35.024760628 +0000 UTC m=+0.823045229 container attach 8b30439463e7fec311a3143ea0130fea06d7438f958aa8278b13b864fd0e2d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]: {
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:    "1": [
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:        {
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            "devices": [
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "/dev/loop3"
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            ],
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            "lv_name": "ceph_lv0",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            "lv_size": "7511998464",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            "name": "ceph_lv0",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            "tags": {
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "ceph.cluster_name": "ceph",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "ceph.crush_device_class": "",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "ceph.encrypted": "0",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "ceph.osd_id": "1",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "ceph.type": "block",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:                "ceph.vdo": "0"
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            },
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            "type": "block",
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:            "vg_name": "ceph_vg0"
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:        }
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]:    ]
Jan 26 13:23:35 np0005596060 frosty_beaver[276726]: }
Jan 26 13:23:35 np0005596060 systemd[1]: libpod-8b30439463e7fec311a3143ea0130fea06d7438f958aa8278b13b864fd0e2d7d.scope: Deactivated successfully.
Jan 26 13:23:35 np0005596060 podman[276659]: 2026-01-26 18:23:35.775930688 +0000 UTC m=+1.574215269 container died 8b30439463e7fec311a3143ea0130fea06d7438f958aa8278b13b864fd0e2d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:23:35 np0005596060 systemd[1]: var-lib-containers-storage-overlay-1322848cce7dedd12fa94e56894570768d8e00664d39a89a7a704e00e1d73c00-merged.mount: Deactivated successfully.
Jan 26 13:23:35 np0005596060 podman[276659]: 2026-01-26 18:23:35.920579017 +0000 UTC m=+1.718863598 container remove 8b30439463e7fec311a3143ea0130fea06d7438f958aa8278b13b864fd0e2d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:23:35 np0005596060 systemd[1]: libpod-conmon-8b30439463e7fec311a3143ea0130fea06d7438f958aa8278b13b864fd0e2d7d.scope: Deactivated successfully.
Jan 26 13:23:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 88 MiB data, 288 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 26 13:23:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:36.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:36 np0005596060 podman[276890]: 2026-01-26 18:23:36.507739974 +0000 UTC m=+0.042980085 container create 0044ea3a3f53d819a0ca902eccea6c9c930c3a2a87a1eeaf692b1bee6b2be7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:23:36 np0005596060 systemd[1]: Started libpod-conmon-0044ea3a3f53d819a0ca902eccea6c9c930c3a2a87a1eeaf692b1bee6b2be7a1.scope.
Jan 26 13:23:36 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:23:36 np0005596060 podman[276890]: 2026-01-26 18:23:36.490557976 +0000 UTC m=+0.025798117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:23:36 np0005596060 nova_compute[247421]: 2026-01-26 18:23:36.648 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:36.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:36 np0005596060 podman[276890]: 2026-01-26 18:23:36.752723226 +0000 UTC m=+0.287963417 container init 0044ea3a3f53d819a0ca902eccea6c9c930c3a2a87a1eeaf692b1bee6b2be7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:23:36 np0005596060 podman[276890]: 2026-01-26 18:23:36.758810291 +0000 UTC m=+0.294050402 container start 0044ea3a3f53d819a0ca902eccea6c9c930c3a2a87a1eeaf692b1bee6b2be7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_buck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:23:36 np0005596060 podman[276890]: 2026-01-26 18:23:36.761841928 +0000 UTC m=+0.297082069 container attach 0044ea3a3f53d819a0ca902eccea6c9c930c3a2a87a1eeaf692b1bee6b2be7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_buck, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 13:23:36 np0005596060 practical_buck[276907]: 167 167
Jan 26 13:23:36 np0005596060 systemd[1]: libpod-0044ea3a3f53d819a0ca902eccea6c9c930c3a2a87a1eeaf692b1bee6b2be7a1.scope: Deactivated successfully.
Jan 26 13:23:36 np0005596060 conmon[276907]: conmon 0044ea3a3f53d819a0ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0044ea3a3f53d819a0ca902eccea6c9c930c3a2a87a1eeaf692b1bee6b2be7a1.scope/container/memory.events
Jan 26 13:23:36 np0005596060 podman[276890]: 2026-01-26 18:23:36.768014735 +0000 UTC m=+0.303254846 container died 0044ea3a3f53d819a0ca902eccea6c9c930c3a2a87a1eeaf692b1bee6b2be7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_buck, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 26 13:23:36 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b9c2c4451bf9859472a00199ccb26c48582674057ee5925791bdfe634834d1e9-merged.mount: Deactivated successfully.
Jan 26 13:23:36 np0005596060 podman[276890]: 2026-01-26 18:23:36.902071075 +0000 UTC m=+0.437311206 container remove 0044ea3a3f53d819a0ca902eccea6c9c930c3a2a87a1eeaf692b1bee6b2be7a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_buck, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 26 13:23:36 np0005596060 systemd[1]: libpod-conmon-0044ea3a3f53d819a0ca902eccea6c9c930c3a2a87a1eeaf692b1bee6b2be7a1.scope: Deactivated successfully.
Jan 26 13:23:37 np0005596060 podman[276931]: 2026-01-26 18:23:37.130070465 +0000 UTC m=+0.056214701 container create e2bba45d919c3b7419796033433f43ad61e00f17a5a00ac9d4660efeaad260fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_perlman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 13:23:37 np0005596060 systemd[1]: Started libpod-conmon-e2bba45d919c3b7419796033433f43ad61e00f17a5a00ac9d4660efeaad260fe.scope.
Jan 26 13:23:37 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:23:37 np0005596060 podman[276931]: 2026-01-26 18:23:37.104025543 +0000 UTC m=+0.030169789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:23:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be237617a011ef47b76bfefab507b456a9a00b8aa2b536a7e69440d60a581436/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be237617a011ef47b76bfefab507b456a9a00b8aa2b536a7e69440d60a581436/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be237617a011ef47b76bfefab507b456a9a00b8aa2b536a7e69440d60a581436/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be237617a011ef47b76bfefab507b456a9a00b8aa2b536a7e69440d60a581436/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:23:37 np0005596060 podman[276931]: 2026-01-26 18:23:37.21713165 +0000 UTC m=+0.143275866 container init e2bba45d919c3b7419796033433f43ad61e00f17a5a00ac9d4660efeaad260fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 13:23:37 np0005596060 podman[276931]: 2026-01-26 18:23:37.226024306 +0000 UTC m=+0.152168502 container start e2bba45d919c3b7419796033433f43ad61e00f17a5a00ac9d4660efeaad260fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 13:23:37 np0005596060 podman[276931]: 2026-01-26 18:23:37.229924975 +0000 UTC m=+0.156069631 container attach e2bba45d919c3b7419796033433f43ad61e00f17a5a00ac9d4660efeaad260fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:23:38 np0005596060 elated_perlman[276947]: {
Jan 26 13:23:38 np0005596060 elated_perlman[276947]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:23:38 np0005596060 elated_perlman[276947]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:23:38 np0005596060 elated_perlman[276947]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:23:38 np0005596060 elated_perlman[276947]:        "osd_id": 1,
Jan 26 13:23:38 np0005596060 elated_perlman[276947]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:23:38 np0005596060 elated_perlman[276947]:        "type": "bluestore"
Jan 26 13:23:38 np0005596060 elated_perlman[276947]:    }
Jan 26 13:23:38 np0005596060 elated_perlman[276947]: }
Jan 26 13:23:38 np0005596060 systemd[1]: libpod-e2bba45d919c3b7419796033433f43ad61e00f17a5a00ac9d4660efeaad260fe.scope: Deactivated successfully.
Jan 26 13:23:38 np0005596060 podman[276931]: 2026-01-26 18:23:38.054172734 +0000 UTC m=+0.980316930 container died e2bba45d919c3b7419796033433f43ad61e00f17a5a00ac9d4660efeaad260fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_perlman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 26 13:23:38 np0005596060 systemd[1]: var-lib-containers-storage-overlay-be237617a011ef47b76bfefab507b456a9a00b8aa2b536a7e69440d60a581436-merged.mount: Deactivated successfully.
Jan 26 13:23:38 np0005596060 podman[276931]: 2026-01-26 18:23:38.107306186 +0000 UTC m=+1.033450382 container remove e2bba45d919c3b7419796033433f43ad61e00f17a5a00ac9d4660efeaad260fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_perlman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:23:38 np0005596060 systemd[1]: libpod-conmon-e2bba45d919c3b7419796033433f43ad61e00f17a5a00ac9d4660efeaad260fe.scope: Deactivated successfully.
Jan 26 13:23:38 np0005596060 nova_compute[247421]: 2026-01-26 18:23:38.118 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:23:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:23:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:38 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 4ef4ca6d-caaf-421b-951d-9bd4a04bac31 does not exist
Jan 26 13:23:38 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ad67eb55-9f83-404c-af64-80dbbdc6f82d does not exist
Jan 26 13:23:38 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9f9327b8-10d9-4ac5-acb6-24a210e8b4e7 does not exist
Jan 26 13:23:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 88 MiB data, 288 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 26 13:23:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:38.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:38.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:39 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:39 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:23:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:23:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 88 MiB data, 288 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:23:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:23:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:40.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:23:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:23:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:40.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:23:41 np0005596060 nova_compute[247421]: 2026-01-26 18:23:41.650 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 88 MiB data, 288 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:23:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:42.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:42.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:43 np0005596060 nova_compute[247421]: 2026-01-26 18:23:43.122 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Jan 26 13:23:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Jan 26 13:23:43 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:23:44
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'backups', 'vms', '.mgr', 'cephfs.cephfs.data', 'images', '.rgw.root', 'default.rgw.log']
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 88 MiB data, 288 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:23:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:23:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:44.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:44.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:23:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:23:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 88 MiB data, 288 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:23:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:46.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:46 np0005596060 nova_compute[247421]: 2026-01-26 18:23:46.651 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:46.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:23:47.414 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:23:47 np0005596060 nova_compute[247421]: 2026-01-26 18:23:47.415 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:23:47.415 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:23:48 np0005596060 nova_compute[247421]: 2026-01-26 18:23:48.122 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 26 13:23:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:23:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:48.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:23:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:48.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:23:49.417 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:23:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:23:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 26 13:23:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:50.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:50.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:51 np0005596060 nova_compute[247421]: 2026-01-26 18:23:51.654 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 2.0 MiB/s wr, 23 op/s
Jan 26 13:23:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:23:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:52.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:23:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:52.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:53 np0005596060 nova_compute[247421]: 2026-01-26 18:23:53.124 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:23:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1.9 MiB/s wr, 22 op/s
Jan 26 13:23:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:23:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:23:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:54.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:23:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:54.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 19 op/s
Jan 26 13:23:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:56.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:56 np0005596060 nova_compute[247421]: 2026-01-26 18:23:56.655 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:23:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:56.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:58 np0005596060 nova_compute[247421]: 2026-01-26 18:23:58.126 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:23:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 21 op/s
Jan 26 13:23:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:23:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:23:58.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:23:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:23:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:23:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:23:58.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:23:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:24:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 2.0 KiB/s wr, 5 op/s
Jan 26 13:24:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:24:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:00.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:24:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:00.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:01 np0005596060 nova_compute[247421]: 2026-01-26 18:24:01.658 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:01 np0005596060 podman[277094]: 2026-01-26 18:24:01.816460692 +0000 UTC m=+0.074863086 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 13:24:01 np0005596060 podman[277095]: 2026-01-26 18:24:01.883306812 +0000 UTC m=+0.141565642 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 13:24:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 2.0 KiB/s wr, 12 op/s
Jan 26 13:24:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:02.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:02.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:03 np0005596060 nova_compute[247421]: 2026-01-26 18:24:03.127 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009918298087660525 of space, bias 1.0, pg target 0.29754894262981574 quantized to 32 (current 32)
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:24:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:24:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 852 B/s wr, 9 op/s
Jan 26 13:24:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:24:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:04.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:04 np0005596060 nova_compute[247421]: 2026-01-26 18:24:04.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:24:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:04.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:05 np0005596060 nova_compute[247421]: 2026-01-26 18:24:05.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:24:05 np0005596060 nova_compute[247421]: 2026-01-26 18:24:05.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:24:05 np0005596060 nova_compute[247421]: 2026-01-26 18:24:05.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 13:24:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 682 B/s wr, 8 op/s
Jan 26 13:24:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:06.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:06 np0005596060 nova_compute[247421]: 2026-01-26 18:24:06.660 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:06.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:08 np0005596060 nova_compute[247421]: 2026-01-26 18:24:08.128 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 8 op/s
Jan 26 13:24:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:08.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:08 np0005596060 nova_compute[247421]: 2026-01-26 18:24:08.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:24:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:08.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:24:10 np0005596060 nova_compute[247421]: 2026-01-26 18:24:10.219 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:24:10 np0005596060 nova_compute[247421]: 2026-01-26 18:24:10.220 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 13:24:10 np0005596060 nova_compute[247421]: 2026-01-26 18:24:10.220 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 13:24:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
Jan 26 13:24:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:10.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:24:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:10.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:24:11 np0005596060 nova_compute[247421]: 2026-01-26 18:24:11.051 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 13:24:11 np0005596060 nova_compute[247421]: 2026-01-26 18:24:11.051 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:24:11 np0005596060 nova_compute[247421]: 2026-01-26 18:24:11.051 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:24:11 np0005596060 nova_compute[247421]: 2026-01-26 18:24:11.663 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 6 op/s
Jan 26 13:24:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:12.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:12 np0005596060 nova_compute[247421]: 2026-01-26 18:24:12.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:24:12 np0005596060 nova_compute[247421]: 2026-01-26 18:24:12.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:24:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:12.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:13 np0005596060 nova_compute[247421]: 2026-01-26 18:24:13.129 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:24:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:24:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:24:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:24:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:24:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:24:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 0 op/s
Jan 26 13:24:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:24:14 np0005596060 nova_compute[247421]: 2026-01-26 18:24:14.532 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:24:14 np0005596060 nova_compute[247421]: 2026-01-26 18:24:14.532 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:24:14 np0005596060 nova_compute[247421]: 2026-01-26 18:24:14.533 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:24:14 np0005596060 nova_compute[247421]: 2026-01-26 18:24:14.533 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 13:24:14 np0005596060 nova_compute[247421]: 2026-01-26 18:24:14.533 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:24:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:24:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:14.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:24:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:14.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:24:14.753 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:24:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:24:14.754 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:24:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:24:14.754 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:24:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:24:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661168748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:24:14 np0005596060 nova_compute[247421]: 2026-01-26 18:24:14.969 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:24:15 np0005596060 nova_compute[247421]: 2026-01-26 18:24:15.137 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 13:24:15 np0005596060 nova_compute[247421]: 2026-01-26 18:24:15.138 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4808MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 13:24:15 np0005596060 nova_compute[247421]: 2026-01-26 18:24:15.138 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:24:15 np0005596060 nova_compute[247421]: 2026-01-26 18:24:15.138 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:24:15 np0005596060 nova_compute[247421]: 2026-01-26 18:24:15.273 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 13:24:15 np0005596060 nova_compute[247421]: 2026-01-26 18:24:15.273 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 13:24:15 np0005596060 nova_compute[247421]: 2026-01-26 18:24:15.384 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:24:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:24:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2714798396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:24:15 np0005596060 nova_compute[247421]: 2026-01-26 18:24:15.889 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:24:15 np0005596060 nova_compute[247421]: 2026-01-26 18:24:15.894 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:24:15 np0005596060 nova_compute[247421]: 2026-01-26 18:24:15.914 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:24:15 np0005596060 nova_compute[247421]: 2026-01-26 18:24:15.916 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 13:24:15 np0005596060 nova_compute[247421]: 2026-01-26 18:24:15.916 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:24:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 379 KiB/s rd, 0 op/s
Jan 26 13:24:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:24:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:16.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:24:16 np0005596060 nova_compute[247421]: 2026-01-26 18:24:16.663 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:16.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:18 np0005596060 nova_compute[247421]: 2026-01-26 18:24:18.182 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 341 B/s wr, 6 op/s
Jan 26 13:24:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:18.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:24:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:18.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:24:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:24:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 108 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 341 B/s wr, 6 op/s
Jan 26 13:24:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:20.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:24:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:20.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:24:21 np0005596060 nova_compute[247421]: 2026-01-26 18:24:21.664 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:21 np0005596060 nova_compute[247421]: 2026-01-26 18:24:21.916 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:24:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 142 MiB data, 322 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 36 op/s
Jan 26 13:24:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:22.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:22.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:23 np0005596060 nova_compute[247421]: 2026-01-26 18:24:23.184 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 154 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 26 13:24:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:24:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:24:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:24.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:24:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:24:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:24.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:24:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 154 MiB data, 313 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 49 op/s
Jan 26 13:24:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:24:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:26.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:24:26 np0005596060 nova_compute[247421]: 2026-01-26 18:24:26.668 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:26.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:28 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:24:28.019 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 13:24:28 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:24:28.020 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 13:24:28 np0005596060 nova_compute[247421]: 2026-01-26 18:24:28.020 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:28 np0005596060 nova_compute[247421]: 2026-01-26 18:24:28.185 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 189 MiB data, 326 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.9 MiB/s wr, 90 op/s
Jan 26 13:24:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:28.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:24:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:28.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:24:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:24:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 189 MiB data, 326 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 83 op/s
Jan 26 13:24:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:30.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:30.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:31 np0005596060 nova_compute[247421]: 2026-01-26 18:24:31.670 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 240 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.2 MiB/s wr, 109 op/s
Jan 26 13:24:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:32.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:32.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:32 np0005596060 podman[277248]: 2026-01-26 18:24:32.831268879 +0000 UTC m=+0.080621532 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:24:32 np0005596060 podman[277249]: 2026-01-26 18:24:32.910517675 +0000 UTC m=+0.155522698 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Jan 26 13:24:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:24:33.022 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 13:24:33 np0005596060 nova_compute[247421]: 2026-01-26 18:24:33.187 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 247 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.1 MiB/s wr, 80 op/s
Jan 26 13:24:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:24:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:24:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:34.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:24:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:34.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 247 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 3.5 MiB/s wr, 67 op/s
Jan 26 13:24:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:36.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:36 np0005596060 nova_compute[247421]: 2026-01-26 18:24:36.673 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:36.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:38 np0005596060 nova_compute[247421]: 2026-01-26 18:24:38.190 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:24:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 247 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 3.5 MiB/s wr, 67 op/s
Jan 26 13:24:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:24:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:38.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:24:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:38.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:24:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 26 13:24:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 13:24:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:24:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:24:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 26 13:24:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 13:24:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:24:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:24:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev ef82ee65-3b0d-4180-99b3-ac3c83a10685 does not exist
Jan 26 13:24:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8cad3f79-a165-4a92-9970-e24064fcea41 does not exist
Jan 26 13:24:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev fda42aaa-832f-4dfc-be05-3150b38823e2 does not exist
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:24:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:24:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 247 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.4 MiB/s wr, 26 op/s
Jan 26 13:24:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:24:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:40.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:24:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:40.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:41 np0005596060 podman[277614]: 2026-01-26 18:24:41.002078769 +0000 UTC m=+0.051224124 container create 20ea8b0de83a23aaf5699c198f61cd445c6cec26c23626fbd48d037c90a72c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bartik, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 13:24:41 np0005596060 systemd[1]: Started libpod-conmon-20ea8b0de83a23aaf5699c198f61cd445c6cec26c23626fbd48d037c90a72c46.scope.
Jan 26 13:24:41 np0005596060 podman[277614]: 2026-01-26 18:24:40.980446659 +0000 UTC m=+0.029592044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:24:41 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:24:41 np0005596060 podman[277614]: 2026-01-26 18:24:41.104134295 +0000 UTC m=+0.153279670 container init 20ea8b0de83a23aaf5699c198f61cd445c6cec26c23626fbd48d037c90a72c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bartik, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:24:41 np0005596060 podman[277614]: 2026-01-26 18:24:41.113883893 +0000 UTC m=+0.163029248 container start 20ea8b0de83a23aaf5699c198f61cd445c6cec26c23626fbd48d037c90a72c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:24:41 np0005596060 podman[277614]: 2026-01-26 18:24:41.117075245 +0000 UTC m=+0.166220600 container attach 20ea8b0de83a23aaf5699c198f61cd445c6cec26c23626fbd48d037c90a72c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 13:24:41 np0005596060 friendly_bartik[277630]: 167 167
Jan 26 13:24:41 np0005596060 systemd[1]: libpod-20ea8b0de83a23aaf5699c198f61cd445c6cec26c23626fbd48d037c90a72c46.scope: Deactivated successfully.
Jan 26 13:24:41 np0005596060 podman[277614]: 2026-01-26 18:24:41.123037636 +0000 UTC m=+0.172183001 container died 20ea8b0de83a23aaf5699c198f61cd445c6cec26c23626fbd48d037c90a72c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bartik, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 13:24:41 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a1d5f97153228ed81fbcfae081b9e926fcebfd5b44da5fee6fc9ff5d23c38abd-merged.mount: Deactivated successfully.
Jan 26 13:24:41 np0005596060 podman[277614]: 2026-01-26 18:24:41.171293744 +0000 UTC m=+0.220439099 container remove 20ea8b0de83a23aaf5699c198f61cd445c6cec26c23626fbd48d037c90a72c46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 13:24:41 np0005596060 systemd[1]: libpod-conmon-20ea8b0de83a23aaf5699c198f61cd445c6cec26c23626fbd48d037c90a72c46.scope: Deactivated successfully.
Jan 26 13:24:41 np0005596060 podman[277654]: 2026-01-26 18:24:41.320713385 +0000 UTC m=+0.027193193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:24:41 np0005596060 podman[277654]: 2026-01-26 18:24:41.499081033 +0000 UTC m=+0.205560731 container create 1dc71afec0409170506745970601d203279ec74488d45e17b30849171f7a3446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 26 13:24:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:24:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:24:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:24:41 np0005596060 systemd[1]: Started libpod-conmon-1dc71afec0409170506745970601d203279ec74488d45e17b30849171f7a3446.scope.
Jan 26 13:24:41 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:24:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b35dab4fd81de7011cfa98dc6885e6594d55aab8f65b8d16d04c46a599b7e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b35dab4fd81de7011cfa98dc6885e6594d55aab8f65b8d16d04c46a599b7e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b35dab4fd81de7011cfa98dc6885e6594d55aab8f65b8d16d04c46a599b7e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b35dab4fd81de7011cfa98dc6885e6594d55aab8f65b8d16d04c46a599b7e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b35dab4fd81de7011cfa98dc6885e6594d55aab8f65b8d16d04c46a599b7e3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:41 np0005596060 nova_compute[247421]: 2026-01-26 18:24:41.679 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:24:41 np0005596060 podman[277654]: 2026-01-26 18:24:41.696482375 +0000 UTC m=+0.402962143 container init 1dc71afec0409170506745970601d203279ec74488d45e17b30849171f7a3446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:24:41 np0005596060 podman[277654]: 2026-01-26 18:24:41.707419333 +0000 UTC m=+0.413899051 container start 1dc71afec0409170506745970601d203279ec74488d45e17b30849171f7a3446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:24:41 np0005596060 podman[277654]: 2026-01-26 18:24:41.71400352 +0000 UTC m=+0.420483318 container attach 1dc71afec0409170506745970601d203279ec74488d45e17b30849171f7a3446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 13:24:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 247 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 2.4 MiB/s wr, 26 op/s
Jan 26 13:24:42 np0005596060 musing_feistel[277670]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:24:42 np0005596060 musing_feistel[277670]: --> relative data size: 1.0
Jan 26 13:24:42 np0005596060 musing_feistel[277670]: --> All data devices are unavailable
Jan 26 13:24:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:24:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:42.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:24:42 np0005596060 systemd[1]: libpod-1dc71afec0409170506745970601d203279ec74488d45e17b30849171f7a3446.scope: Deactivated successfully.
Jan 26 13:24:42 np0005596060 podman[277654]: 2026-01-26 18:24:42.623530408 +0000 UTC m=+1.330010116 container died 1dc71afec0409170506745970601d203279ec74488d45e17b30849171f7a3446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 13:24:42 np0005596060 systemd[1]: var-lib-containers-storage-overlay-24b35dab4fd81de7011cfa98dc6885e6594d55aab8f65b8d16d04c46a599b7e3-merged.mount: Deactivated successfully.
Jan 26 13:24:42 np0005596060 podman[277654]: 2026-01-26 18:24:42.689924587 +0000 UTC m=+1.396404295 container remove 1dc71afec0409170506745970601d203279ec74488d45e17b30849171f7a3446 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:24:42 np0005596060 systemd[1]: libpod-conmon-1dc71afec0409170506745970601d203279ec74488d45e17b30849171f7a3446.scope: Deactivated successfully.
Jan 26 13:24:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:24:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:42.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:24:43 np0005596060 nova_compute[247421]: 2026-01-26 18:24:43.191 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:24:43 np0005596060 podman[277838]: 2026-01-26 18:24:43.471455369 +0000 UTC m=+0.065250171 container create 6f6f42825581b40d1dff62b4185b6b71b89e2e3b229102f114a37e4bd73a0e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 13:24:43 np0005596060 systemd[1]: Started libpod-conmon-6f6f42825581b40d1dff62b4185b6b71b89e2e3b229102f114a37e4bd73a0e6d.scope.
Jan 26 13:24:43 np0005596060 podman[277838]: 2026-01-26 18:24:43.445734765 +0000 UTC m=+0.039529637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:24:43 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:24:43 np0005596060 podman[277838]: 2026-01-26 18:24:43.567729688 +0000 UTC m=+0.161524530 container init 6f6f42825581b40d1dff62b4185b6b71b89e2e3b229102f114a37e4bd73a0e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 13:24:43 np0005596060 podman[277838]: 2026-01-26 18:24:43.579636011 +0000 UTC m=+0.173430803 container start 6f6f42825581b40d1dff62b4185b6b71b89e2e3b229102f114a37e4bd73a0e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:24:43 np0005596060 podman[277838]: 2026-01-26 18:24:43.584028013 +0000 UTC m=+0.177822845 container attach 6f6f42825581b40d1dff62b4185b6b71b89e2e3b229102f114a37e4bd73a0e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:24:43 np0005596060 confident_snyder[277854]: 167 167
Jan 26 13:24:43 np0005596060 systemd[1]: libpod-6f6f42825581b40d1dff62b4185b6b71b89e2e3b229102f114a37e4bd73a0e6d.scope: Deactivated successfully.
Jan 26 13:24:43 np0005596060 podman[277838]: 2026-01-26 18:24:43.58824308 +0000 UTC m=+0.182037932 container died 6f6f42825581b40d1dff62b4185b6b71b89e2e3b229102f114a37e4bd73a0e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 26 13:24:43 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4f74ab0c2fbecadec49980cb1e7ee02e53d9caa6cae7d97068940b0bb69db787-merged.mount: Deactivated successfully.
Jan 26 13:24:43 np0005596060 podman[277838]: 2026-01-26 18:24:43.637618576 +0000 UTC m=+0.231413368 container remove 6f6f42825581b40d1dff62b4185b6b71b89e2e3b229102f114a37e4bd73a0e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:24:43 np0005596060 systemd[1]: libpod-conmon-6f6f42825581b40d1dff62b4185b6b71b89e2e3b229102f114a37e4bd73a0e6d.scope: Deactivated successfully.
Jan 26 13:24:43 np0005596060 podman[277878]: 2026-01-26 18:24:43.837229184 +0000 UTC m=+0.047932530 container create e0d7438bc13839ebdb79326a4bfd71623065b4f32b5c1e38c01335afbbc4db7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 13:24:43 np0005596060 systemd[1]: Started libpod-conmon-e0d7438bc13839ebdb79326a4bfd71623065b4f32b5c1e38c01335afbbc4db7c.scope.
Jan 26 13:24:43 np0005596060 podman[277878]: 2026-01-26 18:24:43.816512597 +0000 UTC m=+0.027215993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:24:43 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:24:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc5abe6762d0dad8e8009678b4a650d9efe17e98de3104c3f186d6cf19dbfc7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc5abe6762d0dad8e8009678b4a650d9efe17e98de3104c3f186d6cf19dbfc7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc5abe6762d0dad8e8009678b4a650d9efe17e98de3104c3f186d6cf19dbfc7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc5abe6762d0dad8e8009678b4a650d9efe17e98de3104c3f186d6cf19dbfc7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:43 np0005596060 podman[277878]: 2026-01-26 18:24:43.939412344 +0000 UTC m=+0.150115720 container init e0d7438bc13839ebdb79326a4bfd71623065b4f32b5c1e38c01335afbbc4db7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 13:24:43 np0005596060 podman[277878]: 2026-01-26 18:24:43.94632994 +0000 UTC m=+0.157033286 container start e0d7438bc13839ebdb79326a4bfd71623065b4f32b5c1e38c01335afbbc4db7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_elion, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:24:43 np0005596060 podman[277878]: 2026-01-26 18:24:43.951293026 +0000 UTC m=+0.161996402 container attach e0d7438bc13839ebdb79326a4bfd71623065b4f32b5c1e38c01335afbbc4db7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_elion, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:24:44
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'vms', 'backups', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'images']
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 247 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 114 KiB/s wr, 0 op/s
Jan 26 13:24:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:24:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:44.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:44.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:44 np0005596060 serene_elion[277894]: {
Jan 26 13:24:44 np0005596060 serene_elion[277894]:    "1": [
Jan 26 13:24:44 np0005596060 serene_elion[277894]:        {
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            "devices": [
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "/dev/loop3"
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            ],
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            "lv_name": "ceph_lv0",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            "lv_size": "7511998464",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            "name": "ceph_lv0",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            "tags": {
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "ceph.cluster_name": "ceph",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "ceph.crush_device_class": "",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "ceph.encrypted": "0",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "ceph.osd_id": "1",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "ceph.type": "block",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:                "ceph.vdo": "0"
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            },
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            "type": "block",
Jan 26 13:24:44 np0005596060 serene_elion[277894]:            "vg_name": "ceph_vg0"
Jan 26 13:24:44 np0005596060 serene_elion[277894]:        }
Jan 26 13:24:44 np0005596060 serene_elion[277894]:    ]
Jan 26 13:24:44 np0005596060 serene_elion[277894]: }
Jan 26 13:24:44 np0005596060 systemd[1]: libpod-e0d7438bc13839ebdb79326a4bfd71623065b4f32b5c1e38c01335afbbc4db7c.scope: Deactivated successfully.
Jan 26 13:24:44 np0005596060 podman[277878]: 2026-01-26 18:24:44.842861586 +0000 UTC m=+1.053564962 container died e0d7438bc13839ebdb79326a4bfd71623065b4f32b5c1e38c01335afbbc4db7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_elion, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:24:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:24:44 np0005596060 systemd[1]: var-lib-containers-storage-overlay-cc5abe6762d0dad8e8009678b4a650d9efe17e98de3104c3f186d6cf19dbfc7a-merged.mount: Deactivated successfully.
Jan 26 13:24:44 np0005596060 podman[277878]: 2026-01-26 18:24:44.910762374 +0000 UTC m=+1.121465720 container remove e0d7438bc13839ebdb79326a4bfd71623065b4f32b5c1e38c01335afbbc4db7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_elion, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 13:24:44 np0005596060 systemd[1]: libpod-conmon-e0d7438bc13839ebdb79326a4bfd71623065b4f32b5c1e38c01335afbbc4db7c.scope: Deactivated successfully.
Jan 26 13:24:45 np0005596060 podman[278057]: 2026-01-26 18:24:45.591751308 +0000 UTC m=+0.029226555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:24:45 np0005596060 podman[278057]: 2026-01-26 18:24:45.921080896 +0000 UTC m=+0.358556133 container create 3a3d0905edb2e4edeae267a6d5a918abd1a3099c3e15b336c1f83bd527a8b9d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_benz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 13:24:46 np0005596060 systemd[1]: Started libpod-conmon-3a3d0905edb2e4edeae267a6d5a918abd1a3099c3e15b336c1f83bd527a8b9d6.scope.
Jan 26 13:24:46 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:24:46 np0005596060 podman[278057]: 2026-01-26 18:24:46.3402964 +0000 UTC m=+0.777771647 container init 3a3d0905edb2e4edeae267a6d5a918abd1a3099c3e15b336c1f83bd527a8b9d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:24:46 np0005596060 podman[278057]: 2026-01-26 18:24:46.352596293 +0000 UTC m=+0.790071560 container start 3a3d0905edb2e4edeae267a6d5a918abd1a3099c3e15b336c1f83bd527a8b9d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_benz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:24:46 np0005596060 angry_benz[278073]: 167 167
Jan 26 13:24:46 np0005596060 systemd[1]: libpod-3a3d0905edb2e4edeae267a6d5a918abd1a3099c3e15b336c1f83bd527a8b9d6.scope: Deactivated successfully.
Jan 26 13:24:46 np0005596060 conmon[278073]: conmon 3a3d0905edb2e4edeae2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a3d0905edb2e4edeae267a6d5a918abd1a3099c3e15b336c1f83bd527a8b9d6.scope/container/memory.events
Jan 26 13:24:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 247 MiB data, 356 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:24:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:46 np0005596060 podman[278057]: 2026-01-26 18:24:46.675147669 +0000 UTC m=+1.112622996 container attach 3a3d0905edb2e4edeae267a6d5a918abd1a3099c3e15b336c1f83bd527a8b9d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 26 13:24:46 np0005596060 podman[278057]: 2026-01-26 18:24:46.675774205 +0000 UTC m=+1.113249482 container died 3a3d0905edb2e4edeae267a6d5a918abd1a3099c3e15b336c1f83bd527a8b9d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:24:46 np0005596060 nova_compute[247421]: 2026-01-26 18:24:46.683 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:24:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:46.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0b59e03a76717aed78dc775530eec5c45ab235e5a264a0ae8a889e99f20fc30b-merged.mount: Deactivated successfully.
Jan 26 13:24:47 np0005596060 podman[278057]: 2026-01-26 18:24:47.106726498 +0000 UTC m=+1.544201745 container remove 3a3d0905edb2e4edeae267a6d5a918abd1a3099c3e15b336c1f83bd527a8b9d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_benz, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:24:47 np0005596060 systemd[1]: libpod-conmon-3a3d0905edb2e4edeae267a6d5a918abd1a3099c3e15b336c1f83bd527a8b9d6.scope: Deactivated successfully.
Jan 26 13:24:47 np0005596060 podman[278099]: 2026-01-26 18:24:47.303805382 +0000 UTC m=+0.063121957 container create e29c1b76eef6fc9440514a7428e9598e98d09474b0b306f1a282dfe77026356c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 13:24:47 np0005596060 systemd[1]: Started libpod-conmon-e29c1b76eef6fc9440514a7428e9598e98d09474b0b306f1a282dfe77026356c.scope.
Jan 26 13:24:47 np0005596060 podman[278099]: 2026-01-26 18:24:47.264872381 +0000 UTC m=+0.024188976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:24:47 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:24:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383dd18e70bbc890e02c3f1ea86ff0571ba499c39cd65e1671245b353dc1b2dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383dd18e70bbc890e02c3f1ea86ff0571ba499c39cd65e1671245b353dc1b2dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383dd18e70bbc890e02c3f1ea86ff0571ba499c39cd65e1671245b353dc1b2dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/383dd18e70bbc890e02c3f1ea86ff0571ba499c39cd65e1671245b353dc1b2dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:24:47 np0005596060 podman[278099]: 2026-01-26 18:24:47.405604692 +0000 UTC m=+0.164921287 container init e29c1b76eef6fc9440514a7428e9598e98d09474b0b306f1a282dfe77026356c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:24:47 np0005596060 podman[278099]: 2026-01-26 18:24:47.41535673 +0000 UTC m=+0.174673305 container start e29c1b76eef6fc9440514a7428e9598e98d09474b0b306f1a282dfe77026356c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_noyce, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:24:47 np0005596060 podman[278099]: 2026-01-26 18:24:47.418810167 +0000 UTC m=+0.178126752 container attach e29c1b76eef6fc9440514a7428e9598e98d09474b0b306f1a282dfe77026356c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_noyce, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:24:48 np0005596060 nova_compute[247421]: 2026-01-26 18:24:48.193 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:24:48 np0005596060 boring_noyce[278116]: {
Jan 26 13:24:48 np0005596060 boring_noyce[278116]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:24:48 np0005596060 boring_noyce[278116]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:24:48 np0005596060 boring_noyce[278116]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:24:48 np0005596060 boring_noyce[278116]:        "osd_id": 1,
Jan 26 13:24:48 np0005596060 boring_noyce[278116]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:24:48 np0005596060 boring_noyce[278116]:        "type": "bluestore"
Jan 26 13:24:48 np0005596060 boring_noyce[278116]:    }
Jan 26 13:24:48 np0005596060 boring_noyce[278116]: }
Jan 26 13:24:48 np0005596060 systemd[1]: libpod-e29c1b76eef6fc9440514a7428e9598e98d09474b0b306f1a282dfe77026356c.scope: Deactivated successfully.
Jan 26 13:24:48 np0005596060 podman[278099]: 2026-01-26 18:24:48.264312476 +0000 UTC m=+1.023629061 container died e29c1b76eef6fc9440514a7428e9598e98d09474b0b306f1a282dfe77026356c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 13:24:48 np0005596060 systemd[1]: var-lib-containers-storage-overlay-383dd18e70bbc890e02c3f1ea86ff0571ba499c39cd65e1671245b353dc1b2dc-merged.mount: Deactivated successfully.
Jan 26 13:24:48 np0005596060 podman[278099]: 2026-01-26 18:24:48.326472017 +0000 UTC m=+1.085788592 container remove e29c1b76eef6fc9440514a7428e9598e98d09474b0b306f1a282dfe77026356c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 26 13:24:48 np0005596060 systemd[1]: libpod-conmon-e29c1b76eef6fc9440514a7428e9598e98d09474b0b306f1a282dfe77026356c.scope: Deactivated successfully.
Jan 26 13:24:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:24:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:24:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:24:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:24:48 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 1ec985fe-d758-4f4f-bef9-c6ba53dc8f75 does not exist
Jan 26 13:24:48 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 646a71f4-7966-45fe-98b7-36a69efe166f does not exist
Jan 26 13:24:48 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 087d5562-4473-4cce-b62a-645f46a2e83d does not exist
Jan 26 13:24:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 247 MiB data, 356 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:24:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:48.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:48.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:24:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:24:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:24:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 247 MiB data, 356 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:24:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:24:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:50.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:24:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:50.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:51 np0005596060 nova_compute[247421]: 2026-01-26 18:24:51.686 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:24:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:24:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1436815638' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:24:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:24:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1436815638' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:24:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 237 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 511 B/s wr, 28 op/s
Jan 26 13:24:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:52.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:52.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:53 np0005596060 nova_compute[247421]: 2026-01-26 18:24:53.196 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:24:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 203 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 852 B/s wr, 44 op/s
Jan 26 13:24:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:24:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:54.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:54.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 203 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 853 B/s wr, 44 op/s
Jan 26 13:24:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:56.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:56 np0005596060 nova_compute[247421]: 2026-01-26 18:24:56.688 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:24:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:24:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:56.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:24:58 np0005596060 nova_compute[247421]: 2026-01-26 18:24:58.198 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:24:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 2.8 KiB/s wr, 90 op/s
Jan 26 13:24:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:24:58.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:24:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:24:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:24:58.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:24:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:24:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1134080759' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:24:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:24:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1134080759' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:24:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:25:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 2.8 KiB/s wr, 90 op/s
Jan 26 13:25:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:25:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:00.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:25:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:00.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:01 np0005596060 nova_compute[247421]: 2026-01-26 18:25:01.690 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 83 KiB/s rd, 3.3 KiB/s wr, 115 op/s
Jan 26 13:25:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:02.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:02.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:03 np0005596060 nova_compute[247421]: 2026-01-26 18:25:03.240 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:03 np0005596060 podman[278259]: 2026-01-26 18:25:03.804305886 +0000 UTC m=+0.059450494 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:25:03 np0005596060 podman[278260]: 2026-01-26 18:25:03.844812236 +0000 UTC m=+0.100088337 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-06 of space, bias 1.0, pg target 0.0005452610273590173 quantized to 32 (current 32)
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:25:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:25:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 69 KiB/s rd, 2.8 KiB/s wr, 97 op/s
Jan 26 13:25:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:25:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:04.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:04.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:25:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/603132295' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:25:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:25:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/603132295' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:25:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 59 KiB/s rd, 2.5 KiB/s wr, 82 op/s
Jan 26 13:25:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:06.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:06 np0005596060 nova_compute[247421]: 2026-01-26 18:25:06.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:25:06 np0005596060 nova_compute[247421]: 2026-01-26 18:25:06.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:25:06 np0005596060 nova_compute[247421]: 2026-01-26 18:25:06.691 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:06.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:07 np0005596060 nova_compute[247421]: 2026-01-26 18:25:07.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:25:07 np0005596060 nova_compute[247421]: 2026-01-26 18:25:07.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:25:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:25:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2787470545' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:25:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:25:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2787470545' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:25:08 np0005596060 nova_compute[247421]: 2026-01-26 18:25:08.244 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 3.0 KiB/s wr, 110 op/s
Jan 26 13:25:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:08.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:08.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:25:09 np0005596060 nova_compute[247421]: 2026-01-26 18:25:09.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:25:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:25:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/565359614' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:25:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:25:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/565359614' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:25:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 1023 B/s wr, 63 op/s
Jan 26 13:25:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:10.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:10 np0005596060 nova_compute[247421]: 2026-01-26 18:25:10.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:25:10 np0005596060 nova_compute[247421]: 2026-01-26 18:25:10.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:25:10 np0005596060 nova_compute[247421]: 2026-01-26 18:25:10.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:25:10 np0005596060 nova_compute[247421]: 2026-01-26 18:25:10.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:25:10 np0005596060 nova_compute[247421]: 2026-01-26 18:25:10.673 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:25:10 np0005596060 nova_compute[247421]: 2026-01-26 18:25:10.673 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:25:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:10.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:11 np0005596060 nova_compute[247421]: 2026-01-26 18:25:11.693 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Jan 26 13:25:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Jan 26 13:25:12 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Jan 26 13:25:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 57 KiB/s rd, 1.9 KiB/s wr, 76 op/s
Jan 26 13:25:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:25:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:12.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:25:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:12.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:25:13.085 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:25:13 np0005596060 nova_compute[247421]: 2026-01-26 18:25:13.085 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:25:13.086 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:25:13 np0005596060 nova_compute[247421]: 2026-01-26 18:25:13.245 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:13 np0005596060 nova_compute[247421]: 2026-01-26 18:25:13.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:25:13 np0005596060 nova_compute[247421]: 2026-01-26 18:25:13.680 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:25:13 np0005596060 nova_compute[247421]: 2026-01-26 18:25:13.680 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:25:13 np0005596060 nova_compute[247421]: 2026-01-26 18:25:13.681 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:25:13 np0005596060 nova_compute[247421]: 2026-01-26 18:25:13.681 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:25:13 np0005596060 nova_compute[247421]: 2026-01-26 18:25:13.681 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:25:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:25:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:25:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:25:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:25:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:25:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:25:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:25:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/534874135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.140 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.327 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.328 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4795MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.329 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.329 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.415 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.415 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:25:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 8 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 293 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 1.9 KiB/s wr, 62 op/s
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.444 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:25:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:25:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:14.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:25:14.754 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:25:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:25:14.755 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:25:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:25:14.755 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:25:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:14.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:25:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159432364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.879 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.884 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.915 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.917 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:25:14 np0005596060 nova_compute[247421]: 2026-01-26 18:25:14.917 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:25:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 8 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 293 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 46 KiB/s rd, 1.9 KiB/s wr, 62 op/s
Jan 26 13:25:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:16.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:16 np0005596060 nova_compute[247421]: 2026-01-26 18:25:16.695 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:25:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:16.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:25:18 np0005596060 nova_compute[247421]: 2026-01-26 18:25:18.247 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Jan 26 13:25:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:18.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:18.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:25:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Jan 26 13:25:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Jan 26 13:25:19 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Jan 26 13:25:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 35 op/s
Jan 26 13:25:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:20.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:20.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:21 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:25:21.088 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:25:21 np0005596060 nova_compute[247421]: 2026-01-26 18:25:21.701 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:21 np0005596060 nova_compute[247421]: 2026-01-26 18:25:21.917 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:25:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 716 B/s wr, 16 op/s
Jan 26 13:25:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:22.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:22.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:23 np0005596060 nova_compute[247421]: 2026-01-26 18:25:23.256 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 716 B/s wr, 16 op/s
Jan 26 13:25:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:25:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:24.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:24.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 716 B/s wr, 16 op/s
Jan 26 13:25:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:25:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:26.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:25:26 np0005596060 nova_compute[247421]: 2026-01-26 18:25:26.702 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:26.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:28 np0005596060 nova_compute[247421]: 2026-01-26 18:25:28.258 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:28.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:28.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:25:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:25:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:30.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:25:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:30.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:31 np0005596060 nova_compute[247421]: 2026-01-26 18:25:31.706 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:25:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:32.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:25:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:32.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:33 np0005596060 nova_compute[247421]: 2026-01-26 18:25:33.260 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:25:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:34.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:34 np0005596060 podman[278415]: 2026-01-26 18:25:34.78685968 +0000 UTC m=+0.050400373 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:25:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:34.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:34 np0005596060 podman[278416]: 2026-01-26 18:25:34.889123652 +0000 UTC m=+0.152361407 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:25:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.523430) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451936523533, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1597, "num_deletes": 252, "total_data_size": 2747716, "memory_usage": 2783640, "flush_reason": "Manual Compaction"}
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451936548484, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2693770, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32440, "largest_seqno": 34036, "table_properties": {"data_size": 2686325, "index_size": 4388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15918, "raw_average_key_size": 20, "raw_value_size": 2671345, "raw_average_value_size": 3433, "num_data_blocks": 191, "num_entries": 778, "num_filter_entries": 778, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769451790, "oldest_key_time": 1769451790, "file_creation_time": 1769451936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 25080 microseconds, and 11806 cpu microseconds.
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.548523) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2693770 bytes OK
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.548542) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.550086) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.550109) EVENT_LOG_v1 {"time_micros": 1769451936550095, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.550125) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2740941, prev total WAL file size 2741578, number of live WAL files 2.
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.550975) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2630KB)], [71(8157KB)]
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451936551050, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11046551, "oldest_snapshot_seqno": -1}
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5799 keys, 9074842 bytes, temperature: kUnknown
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451936609577, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9074842, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9036656, "index_size": 22588, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 149110, "raw_average_key_size": 25, "raw_value_size": 8932681, "raw_average_value_size": 1540, "num_data_blocks": 907, "num_entries": 5799, "num_filter_entries": 5799, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769451936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.609868) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9074842 bytes
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.615390) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 188.5 rd, 154.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.0 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(7.5) write-amplify(3.4) OK, records in: 6324, records dropped: 525 output_compression: NoCompression
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.615409) EVENT_LOG_v1 {"time_micros": 1769451936615400, "job": 40, "event": "compaction_finished", "compaction_time_micros": 58613, "compaction_time_cpu_micros": 20377, "output_level": 6, "num_output_files": 1, "total_output_size": 9074842, "num_input_records": 6324, "num_output_records": 5799, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451936616073, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769451936617917, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.550814) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.618096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.618106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.618109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.618111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:25:36 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:25:36.618113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:25:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:36.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:36 np0005596060 nova_compute[247421]: 2026-01-26 18:25:36.708 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:25:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:36.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:25:38 np0005596060 nova_compute[247421]: 2026-01-26 18:25:38.262 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:38.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:38.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:25:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:40.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:40.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:41 np0005596060 nova_compute[247421]: 2026-01-26 18:25:41.709 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:25:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:42.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:25:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:42.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:43 np0005596060 nova_compute[247421]: 2026-01-26 18:25:43.264 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:25:44
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'backups', 'images', 'volumes', 'vms', '.mgr', 'cephfs.cephfs.data']
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:25:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:44.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:25:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:44.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:25:44 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:25:44 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 13:25:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:25:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:25:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:46.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:25:46 np0005596060 nova_compute[247421]: 2026-01-26 18:25:46.711 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:46.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:48 np0005596060 nova_compute[247421]: 2026-01-26 18:25:48.265 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:25:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:48.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:25:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:48.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:25:49 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 389f5f21-342f-4e32-a132-fd26e4a06608 does not exist
Jan 26 13:25:49 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 2932ed0f-6ffc-42f4-99f0-5ffe17421667 does not exist
Jan 26 13:25:49 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9c5fb418-3bfe-4c36-b360-015474c2ea53 does not exist
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:25:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:25:50 np0005596060 podman[278786]: 2026-01-26 18:25:50.366468247 +0000 UTC m=+0.048533115 container create a7e08dbf1b1a9cf473f4a1149e2dcb79af763c5f52bfc5989edaea68d1e27219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Jan 26 13:25:50 np0005596060 systemd[1]: Started libpod-conmon-a7e08dbf1b1a9cf473f4a1149e2dcb79af763c5f52bfc5989edaea68d1e27219.scope.
Jan 26 13:25:50 np0005596060 podman[278786]: 2026-01-26 18:25:50.343331179 +0000 UTC m=+0.025396057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:25:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:50 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:25:50 np0005596060 podman[278786]: 2026-01-26 18:25:50.470541225 +0000 UTC m=+0.152606133 container init a7e08dbf1b1a9cf473f4a1149e2dcb79af763c5f52bfc5989edaea68d1e27219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_euclid, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:25:50 np0005596060 podman[278786]: 2026-01-26 18:25:50.479983595 +0000 UTC m=+0.162048443 container start a7e08dbf1b1a9cf473f4a1149e2dcb79af763c5f52bfc5989edaea68d1e27219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:25:50 np0005596060 podman[278786]: 2026-01-26 18:25:50.48409515 +0000 UTC m=+0.166160048 container attach a7e08dbf1b1a9cf473f4a1149e2dcb79af763c5f52bfc5989edaea68d1e27219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 13:25:50 np0005596060 cranky_euclid[278802]: 167 167
Jan 26 13:25:50 np0005596060 systemd[1]: libpod-a7e08dbf1b1a9cf473f4a1149e2dcb79af763c5f52bfc5989edaea68d1e27219.scope: Deactivated successfully.
Jan 26 13:25:50 np0005596060 conmon[278802]: conmon a7e08dbf1b1a9cf473f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7e08dbf1b1a9cf473f4a1149e2dcb79af763c5f52bfc5989edaea68d1e27219.scope/container/memory.events
Jan 26 13:25:50 np0005596060 podman[278786]: 2026-01-26 18:25:50.49237203 +0000 UTC m=+0.174436878 container died a7e08dbf1b1a9cf473f4a1149e2dcb79af763c5f52bfc5989edaea68d1e27219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_euclid, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:25:50 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d99f8ef80619c14166d3483b0ee87ba3e1c843eea289c16d63b2d38ee61b6b06-merged.mount: Deactivated successfully.
Jan 26 13:25:50 np0005596060 podman[278786]: 2026-01-26 18:25:50.536541934 +0000 UTC m=+0.218606782 container remove a7e08dbf1b1a9cf473f4a1149e2dcb79af763c5f52bfc5989edaea68d1e27219 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_euclid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:25:50 np0005596060 systemd[1]: libpod-conmon-a7e08dbf1b1a9cf473f4a1149e2dcb79af763c5f52bfc5989edaea68d1e27219.scope: Deactivated successfully.
Jan 26 13:25:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:50.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:50 np0005596060 podman[278825]: 2026-01-26 18:25:50.719441167 +0000 UTC m=+0.055373720 container create 1f1f1e86ba6cef01b1e850ae6619c341b8ce8837e6e2a3684f1b30d109fce05f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:25:50 np0005596060 systemd[1]: Started libpod-conmon-1f1f1e86ba6cef01b1e850ae6619c341b8ce8837e6e2a3684f1b30d109fce05f.scope.
Jan 26 13:25:50 np0005596060 podman[278825]: 2026-01-26 18:25:50.693585239 +0000 UTC m=+0.029517802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:25:50 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:25:50 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e8a6b85b9cc8b4f51c08f7bdfaaa13d05a765750ea5cefec40de91500695df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:50 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e8a6b85b9cc8b4f51c08f7bdfaaa13d05a765750ea5cefec40de91500695df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:50 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e8a6b85b9cc8b4f51c08f7bdfaaa13d05a765750ea5cefec40de91500695df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:50 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e8a6b85b9cc8b4f51c08f7bdfaaa13d05a765750ea5cefec40de91500695df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:50 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55e8a6b85b9cc8b4f51c08f7bdfaaa13d05a765750ea5cefec40de91500695df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:50.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:50 np0005596060 podman[278825]: 2026-01-26 18:25:50.819844851 +0000 UTC m=+0.155777474 container init 1f1f1e86ba6cef01b1e850ae6619c341b8ce8837e6e2a3684f1b30d109fce05f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 13:25:50 np0005596060 podman[278825]: 2026-01-26 18:25:50.827555047 +0000 UTC m=+0.163487630 container start 1f1f1e86ba6cef01b1e850ae6619c341b8ce8837e6e2a3684f1b30d109fce05f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 13:25:50 np0005596060 podman[278825]: 2026-01-26 18:25:50.831831136 +0000 UTC m=+0.167763719 container attach 1f1f1e86ba6cef01b1e850ae6619c341b8ce8837e6e2a3684f1b30d109fce05f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:25:51 np0005596060 practical_cohen[278841]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:25:51 np0005596060 practical_cohen[278841]: --> relative data size: 1.0
Jan 26 13:25:51 np0005596060 practical_cohen[278841]: --> All data devices are unavailable
Jan 26 13:25:51 np0005596060 systemd[1]: libpod-1f1f1e86ba6cef01b1e850ae6619c341b8ce8837e6e2a3684f1b30d109fce05f.scope: Deactivated successfully.
Jan 26 13:25:51 np0005596060 podman[278825]: 2026-01-26 18:25:51.664307024 +0000 UTC m=+1.000239587 container died 1f1f1e86ba6cef01b1e850ae6619c341b8ce8837e6e2a3684f1b30d109fce05f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 13:25:51 np0005596060 systemd[1]: var-lib-containers-storage-overlay-55e8a6b85b9cc8b4f51c08f7bdfaaa13d05a765750ea5cefec40de91500695df-merged.mount: Deactivated successfully.
Jan 26 13:25:51 np0005596060 nova_compute[247421]: 2026-01-26 18:25:51.714 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:51 np0005596060 podman[278825]: 2026-01-26 18:25:51.737598588 +0000 UTC m=+1.073531141 container remove 1f1f1e86ba6cef01b1e850ae6619c341b8ce8837e6e2a3684f1b30d109fce05f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:25:51 np0005596060 systemd[1]: libpod-conmon-1f1f1e86ba6cef01b1e850ae6619c341b8ce8837e6e2a3684f1b30d109fce05f.scope: Deactivated successfully.
Jan 26 13:25:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:52 np0005596060 podman[279009]: 2026-01-26 18:25:52.484399656 +0000 UTC m=+0.049796418 container create 3e6ebc3e5dfbd66a46d6d33d487616696507051c6357a334a7f1e6822031aa81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_boyd, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 26 13:25:52 np0005596060 systemd[1]: Started libpod-conmon-3e6ebc3e5dfbd66a46d6d33d487616696507051c6357a334a7f1e6822031aa81.scope.
Jan 26 13:25:52 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:25:52 np0005596060 podman[279009]: 2026-01-26 18:25:52.4633263 +0000 UTC m=+0.028723072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:25:52 np0005596060 podman[279009]: 2026-01-26 18:25:52.560984824 +0000 UTC m=+0.126381596 container init 3e6ebc3e5dfbd66a46d6d33d487616696507051c6357a334a7f1e6822031aa81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_boyd, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 13:25:52 np0005596060 podman[279009]: 2026-01-26 18:25:52.568421343 +0000 UTC m=+0.133818095 container start 3e6ebc3e5dfbd66a46d6d33d487616696507051c6357a334a7f1e6822031aa81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:25:52 np0005596060 podman[279009]: 2026-01-26 18:25:52.571965863 +0000 UTC m=+0.137362735 container attach 3e6ebc3e5dfbd66a46d6d33d487616696507051c6357a334a7f1e6822031aa81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_boyd, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 13:25:52 np0005596060 sad_boyd[279025]: 167 167
Jan 26 13:25:52 np0005596060 systemd[1]: libpod-3e6ebc3e5dfbd66a46d6d33d487616696507051c6357a334a7f1e6822031aa81.scope: Deactivated successfully.
Jan 26 13:25:52 np0005596060 podman[279009]: 2026-01-26 18:25:52.574020896 +0000 UTC m=+0.139417648 container died 3e6ebc3e5dfbd66a46d6d33d487616696507051c6357a334a7f1e6822031aa81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_boyd, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 26 13:25:52 np0005596060 systemd[1]: var-lib-containers-storage-overlay-e9aef0c055f971ceede8afd1e4a6cdf7b06c93b5d5ea65f3fbf304301a33117c-merged.mount: Deactivated successfully.
Jan 26 13:25:52 np0005596060 podman[279009]: 2026-01-26 18:25:52.607743183 +0000 UTC m=+0.173139935 container remove 3e6ebc3e5dfbd66a46d6d33d487616696507051c6357a334a7f1e6822031aa81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_boyd, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 13:25:52 np0005596060 systemd[1]: libpod-conmon-3e6ebc3e5dfbd66a46d6d33d487616696507051c6357a334a7f1e6822031aa81.scope: Deactivated successfully.
Jan 26 13:25:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:52.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:52 np0005596060 podman[279049]: 2026-01-26 18:25:52.78526984 +0000 UTC m=+0.044426342 container create 2e76f6faaf5dd2e8eefd25cc9d32b983882920478bcada034a1c691e9a705bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:25:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:52.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:52 np0005596060 systemd[1]: Started libpod-conmon-2e76f6faaf5dd2e8eefd25cc9d32b983882920478bcada034a1c691e9a705bed.scope.
Jan 26 13:25:52 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:25:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc85649db7d9a5d3f60343b3c60b71056cac5b5c2e5b2f66cf8b31b98db3db84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc85649db7d9a5d3f60343b3c60b71056cac5b5c2e5b2f66cf8b31b98db3db84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc85649db7d9a5d3f60343b3c60b71056cac5b5c2e5b2f66cf8b31b98db3db84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc85649db7d9a5d3f60343b3c60b71056cac5b5c2e5b2f66cf8b31b98db3db84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:52 np0005596060 podman[279049]: 2026-01-26 18:25:52.767265752 +0000 UTC m=+0.026422264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:25:52 np0005596060 podman[279049]: 2026-01-26 18:25:52.872036967 +0000 UTC m=+0.131193519 container init 2e76f6faaf5dd2e8eefd25cc9d32b983882920478bcada034a1c691e9a705bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 13:25:52 np0005596060 podman[279049]: 2026-01-26 18:25:52.878257715 +0000 UTC m=+0.137414217 container start 2e76f6faaf5dd2e8eefd25cc9d32b983882920478bcada034a1c691e9a705bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 13:25:52 np0005596060 podman[279049]: 2026-01-26 18:25:52.881957669 +0000 UTC m=+0.141114171 container attach 2e76f6faaf5dd2e8eefd25cc9d32b983882920478bcada034a1c691e9a705bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:25:53 np0005596060 nova_compute[247421]: 2026-01-26 18:25:53.268 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:53 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:25:53.463 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:25:53 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:25:53.467 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:25:53 np0005596060 nova_compute[247421]: 2026-01-26 18:25:53.468 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]: {
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:    "1": [
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:        {
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            "devices": [
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "/dev/loop3"
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            ],
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            "lv_name": "ceph_lv0",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            "lv_size": "7511998464",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            "name": "ceph_lv0",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            "tags": {
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "ceph.cluster_name": "ceph",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "ceph.crush_device_class": "",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "ceph.encrypted": "0",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "ceph.osd_id": "1",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "ceph.type": "block",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:                "ceph.vdo": "0"
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            },
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            "type": "block",
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:            "vg_name": "ceph_vg0"
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:        }
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]:    ]
Jan 26 13:25:53 np0005596060 vigorous_bouman[279066]: }
Jan 26 13:25:53 np0005596060 systemd[1]: libpod-2e76f6faaf5dd2e8eefd25cc9d32b983882920478bcada034a1c691e9a705bed.scope: Deactivated successfully.
Jan 26 13:25:53 np0005596060 podman[279049]: 2026-01-26 18:25:53.67758326 +0000 UTC m=+0.936739802 container died 2e76f6faaf5dd2e8eefd25cc9d32b983882920478bcada034a1c691e9a705bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:25:53 np0005596060 systemd[1]: var-lib-containers-storage-overlay-bc85649db7d9a5d3f60343b3c60b71056cac5b5c2e5b2f66cf8b31b98db3db84-merged.mount: Deactivated successfully.
Jan 26 13:25:53 np0005596060 podman[279049]: 2026-01-26 18:25:53.750935206 +0000 UTC m=+1.010091718 container remove 2e76f6faaf5dd2e8eefd25cc9d32b983882920478bcada034a1c691e9a705bed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_bouman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Jan 26 13:25:53 np0005596060 systemd[1]: libpod-conmon-2e76f6faaf5dd2e8eefd25cc9d32b983882920478bcada034a1c691e9a705bed.scope: Deactivated successfully.
Jan 26 13:25:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:54 np0005596060 podman[279232]: 2026-01-26 18:25:54.469507077 +0000 UTC m=+0.068511804 container create 3de2f4fe21029b24657303fab6ab315e59de73a09a9704dfa9df207d24232b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaplygin, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:25:54 np0005596060 systemd[1]: Started libpod-conmon-3de2f4fe21029b24657303fab6ab315e59de73a09a9704dfa9df207d24232b1c.scope.
Jan 26 13:25:54 np0005596060 podman[279232]: 2026-01-26 18:25:54.441472444 +0000 UTC m=+0.040477211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:25:54 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:25:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:25:54 np0005596060 podman[279232]: 2026-01-26 18:25:54.634789552 +0000 UTC m=+0.233794319 container init 3de2f4fe21029b24657303fab6ab315e59de73a09a9704dfa9df207d24232b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaplygin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 13:25:54 np0005596060 podman[279232]: 2026-01-26 18:25:54.643816521 +0000 UTC m=+0.242821278 container start 3de2f4fe21029b24657303fab6ab315e59de73a09a9704dfa9df207d24232b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaplygin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:25:54 np0005596060 wizardly_chaplygin[279248]: 167 167
Jan 26 13:25:54 np0005596060 systemd[1]: libpod-3de2f4fe21029b24657303fab6ab315e59de73a09a9704dfa9df207d24232b1c.scope: Deactivated successfully.
Jan 26 13:25:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:25:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:54.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:25:54 np0005596060 podman[279232]: 2026-01-26 18:25:54.733239616 +0000 UTC m=+0.332244353 container attach 3de2f4fe21029b24657303fab6ab315e59de73a09a9704dfa9df207d24232b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaplygin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:25:54 np0005596060 podman[279232]: 2026-01-26 18:25:54.733884952 +0000 UTC m=+0.332889669 container died 3de2f4fe21029b24657303fab6ab315e59de73a09a9704dfa9df207d24232b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaplygin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:25:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:54.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:55 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fd280b043f61051fd91f093da311750b34f0e6042e764c3996b9419aa74d0207-merged.mount: Deactivated successfully.
Jan 26 13:25:55 np0005596060 podman[279232]: 2026-01-26 18:25:55.436485236 +0000 UTC m=+1.035489943 container remove 3de2f4fe21029b24657303fab6ab315e59de73a09a9704dfa9df207d24232b1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 13:25:55 np0005596060 systemd[1]: libpod-conmon-3de2f4fe21029b24657303fab6ab315e59de73a09a9704dfa9df207d24232b1c.scope: Deactivated successfully.
Jan 26 13:25:55 np0005596060 podman[279273]: 2026-01-26 18:25:55.615462309 +0000 UTC m=+0.048573286 container create c9eae6e51b9bbf58f354157be51221f93c970bdbf493aa7d150a81e24cfe51a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jemison, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 26 13:25:55 np0005596060 systemd[1]: Started libpod-conmon-c9eae6e51b9bbf58f354157be51221f93c970bdbf493aa7d150a81e24cfe51a7.scope.
Jan 26 13:25:55 np0005596060 podman[279273]: 2026-01-26 18:25:55.592276109 +0000 UTC m=+0.025387076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:25:55 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:25:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d563aeacd705e5bd90942808e1136f5aca2f332c6b1fd321f300ac3ecaf5ffed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d563aeacd705e5bd90942808e1136f5aca2f332c6b1fd321f300ac3ecaf5ffed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d563aeacd705e5bd90942808e1136f5aca2f332c6b1fd321f300ac3ecaf5ffed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d563aeacd705e5bd90942808e1136f5aca2f332c6b1fd321f300ac3ecaf5ffed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:25:55 np0005596060 podman[279273]: 2026-01-26 18:25:55.711611904 +0000 UTC m=+0.144722871 container init c9eae6e51b9bbf58f354157be51221f93c970bdbf493aa7d150a81e24cfe51a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 26 13:25:55 np0005596060 podman[279273]: 2026-01-26 18:25:55.719541026 +0000 UTC m=+0.152651963 container start c9eae6e51b9bbf58f354157be51221f93c970bdbf493aa7d150a81e24cfe51a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:25:55 np0005596060 podman[279273]: 2026-01-26 18:25:55.723061946 +0000 UTC m=+0.156172893 container attach c9eae6e51b9bbf58f354157be51221f93c970bdbf493aa7d150a81e24cfe51a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:25:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:56 np0005596060 bold_jemison[279289]: {
Jan 26 13:25:56 np0005596060 bold_jemison[279289]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:25:56 np0005596060 bold_jemison[279289]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:25:56 np0005596060 bold_jemison[279289]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:25:56 np0005596060 bold_jemison[279289]:        "osd_id": 1,
Jan 26 13:25:56 np0005596060 bold_jemison[279289]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:25:56 np0005596060 bold_jemison[279289]:        "type": "bluestore"
Jan 26 13:25:56 np0005596060 bold_jemison[279289]:    }
Jan 26 13:25:56 np0005596060 bold_jemison[279289]: }
Jan 26 13:25:56 np0005596060 systemd[1]: libpod-c9eae6e51b9bbf58f354157be51221f93c970bdbf493aa7d150a81e24cfe51a7.scope: Deactivated successfully.
Jan 26 13:25:56 np0005596060 podman[279273]: 2026-01-26 18:25:56.622959579 +0000 UTC m=+1.056070516 container died c9eae6e51b9bbf58f354157be51221f93c970bdbf493aa7d150a81e24cfe51a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jemison, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:25:56 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d563aeacd705e5bd90942808e1136f5aca2f332c6b1fd321f300ac3ecaf5ffed-merged.mount: Deactivated successfully.
Jan 26 13:25:56 np0005596060 podman[279273]: 2026-01-26 18:25:56.681664332 +0000 UTC m=+1.114775269 container remove c9eae6e51b9bbf58f354157be51221f93c970bdbf493aa7d150a81e24cfe51a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 13:25:56 np0005596060 systemd[1]: libpod-conmon-c9eae6e51b9bbf58f354157be51221f93c970bdbf493aa7d150a81e24cfe51a7.scope: Deactivated successfully.
Jan 26 13:25:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:25:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:56.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:25:56 np0005596060 nova_compute[247421]: 2026-01-26 18:25:56.716 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:25:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:25:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:56.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:25:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:25:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:25:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:25:57 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev cdf42a68-f5bc-4d70-a769-7b02a969f0ff does not exist
Jan 26 13:25:57 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 3e29a347-8f81-4ea5-b82a-8eb71907a1d3 does not exist
Jan 26 13:25:57 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 4436308d-0d9d-4b3f-9f8a-0dd3f05d281b does not exist
Jan 26 13:25:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:25:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:25:58 np0005596060 nova_compute[247421]: 2026-01-26 18:25:58.381 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:25:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:25:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:25:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:25:58.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:25:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:25:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:25:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:25:58.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:25:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:26:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:26:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:00.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:26:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:00.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:01 np0005596060 nova_compute[247421]: 2026-01-26 18:26:01.719 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:26:02 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:26:02.469 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:26:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:26:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:02.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:26:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:02.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:03 np0005596060 nova_compute[247421]: 2026-01-26 18:26:03.385 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:26:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:26:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:26:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:04.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:04.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:05 np0005596060 podman[279430]: 2026-01-26 18:26:05.802229133 +0000 UTC m=+0.066617916 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:26:05 np0005596060 podman[279431]: 2026-01-26 18:26:05.884617449 +0000 UTC m=+0.148421207 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:26:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:26:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:26:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:06.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:26:06 np0005596060 nova_compute[247421]: 2026-01-26 18:26:06.719 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:06.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:07 np0005596060 nova_compute[247421]: 2026-01-26 18:26:07.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:26:07 np0005596060 nova_compute[247421]: 2026-01-26 18:26:07.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:26:07 np0005596060 nova_compute[247421]: 2026-01-26 18:26:07.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:26:07 np0005596060 nova_compute[247421]: 2026-01-26 18:26:07.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:26:08 np0005596060 nova_compute[247421]: 2026-01-26 18:26:08.388 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:26:08 np0005596060 nova_compute[247421]: 2026-01-26 18:26:08.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:26:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:08.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:08.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Jan 26 13:26:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Jan 26 13:26:09 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Jan 26 13:26:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:26:10 np0005596060 nova_compute[247421]: 2026-01-26 18:26:10.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:26:10 np0005596060 nova_compute[247421]: 2026-01-26 18:26:10.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:26:10 np0005596060 nova_compute[247421]: 2026-01-26 18:26:10.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:26:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:26:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:10.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:26:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:10.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:10 np0005596060 nova_compute[247421]: 2026-01-26 18:26:10.894 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:26:10 np0005596060 nova_compute[247421]: 2026-01-26 18:26:10.894 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:26:11 np0005596060 nova_compute[247421]: 2026-01-26 18:26:11.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:26:11 np0005596060 nova_compute[247421]: 2026-01-26 18:26:11.721 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 26 13:26:12 np0005596060 nova_compute[247421]: 2026-01-26 18:26:12.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:26:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:26:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:12.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:26:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:26:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:12.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:26:13 np0005596060 nova_compute[247421]: 2026-01-26 18:26:13.389 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:26:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:26:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:26:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:26:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:26:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:26:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 26 13:26:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:14 np0005596060 nova_compute[247421]: 2026-01-26 18:26:14.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:26:14 np0005596060 nova_compute[247421]: 2026-01-26 18:26:14.683 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:26:14 np0005596060 nova_compute[247421]: 2026-01-26 18:26:14.683 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:26:14 np0005596060 nova_compute[247421]: 2026-01-26 18:26:14.683 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:26:14 np0005596060 nova_compute[247421]: 2026-01-26 18:26:14.684 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:26:14 np0005596060 nova_compute[247421]: 2026-01-26 18:26:14.684 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:26:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:26:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:14.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:26:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:26:14.755 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:26:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:26:14.756 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:26:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:26:14.756 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:26:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:14.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:26:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4076827704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:26:15 np0005596060 nova_compute[247421]: 2026-01-26 18:26:15.099 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:26:15 np0005596060 nova_compute[247421]: 2026-01-26 18:26:15.248 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:26:15 np0005596060 nova_compute[247421]: 2026-01-26 18:26:15.249 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4800MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:26:15 np0005596060 nova_compute[247421]: 2026-01-26 18:26:15.249 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:26:15 np0005596060 nova_compute[247421]: 2026-01-26 18:26:15.250 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:26:15 np0005596060 nova_compute[247421]: 2026-01-26 18:26:15.607 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:26:15 np0005596060 nova_compute[247421]: 2026-01-26 18:26:15.607 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:26:15 np0005596060 nova_compute[247421]: 2026-01-26 18:26:15.662 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:26:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:26:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1206822783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:26:16 np0005596060 nova_compute[247421]: 2026-01-26 18:26:16.159 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:26:16 np0005596060 nova_compute[247421]: 2026-01-26 18:26:16.165 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:26:16 np0005596060 nova_compute[247421]: 2026-01-26 18:26:16.320 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:26:16 np0005596060 nova_compute[247421]: 2026-01-26 18:26:16.323 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:26:16 np0005596060 nova_compute[247421]: 2026-01-26 18:26:16.323 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:26:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Jan 26 13:26:16 np0005596060 nova_compute[247421]: 2026-01-26 18:26:16.723 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:16.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:16.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Jan 26 13:26:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Jan 26 13:26:17 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Jan 26 13:26:18 np0005596060 nova_compute[247421]: 2026-01-26 18:26:18.391 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 2.4 MiB/s wr, 45 op/s
Jan 26 13:26:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:18.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:18.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 2.0 MiB/s wr, 39 op/s
Jan 26 13:26:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:20.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:26:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:20.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:26:21 np0005596060 nova_compute[247421]: 2026-01-26 18:26:21.725 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:22 np0005596060 nova_compute[247421]: 2026-01-26 18:26:22.324 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:26:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Jan 26 13:26:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Jan 26 13:26:22 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Jan 26 13:26:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 49 MiB data, 258 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 1.0 MiB/s wr, 53 op/s
Jan 26 13:26:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:22.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:22.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:23 np0005596060 nova_compute[247421]: 2026-01-26 18:26:23.392 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 49 MiB data, 258 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 1.0 MiB/s wr, 53 op/s
Jan 26 13:26:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Jan 26 13:26:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Jan 26 13:26:24 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Jan 26 13:26:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:26:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:24.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:26:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:26:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:24.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:26:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 57 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Jan 26 13:26:26 np0005596060 nova_compute[247421]: 2026-01-26 18:26:26.729 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:26:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:26.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:26:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:26.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:28 np0005596060 nova_compute[247421]: 2026-01-26 18:26:28.393 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 2.6 MiB/s wr, 51 op/s
Jan 26 13:26:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:28.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:28.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.6 MiB/s wr, 48 op/s
Jan 26 13:26:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:30.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:30.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:31 np0005596060 nova_compute[247421]: 2026-01-26 18:26:31.381 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:31 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:26:31.382 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:26:31 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:26:31.384 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:26:31 np0005596060 nova_compute[247421]: 2026-01-26 18:26:31.732 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Jan 26 13:26:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Jan 26 13:26:32 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Jan 26 13:26:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 1.6 MiB/s wr, 23 op/s
Jan 26 13:26:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:26:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:32.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:26:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:32.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:33 np0005596060 nova_compute[247421]: 2026-01-26 18:26:33.394 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 62 MiB data, 271 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 1.3 MiB/s wr, 19 op/s
Jan 26 13:26:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:34.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:34.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:36 np0005596060 podman[279610]: 2026-01-26 18:26:36.138626321 +0000 UTC m=+0.051507232 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 13:26:36 np0005596060 podman[279611]: 2026-01-26 18:26:36.174092753 +0000 UTC m=+0.082817408 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 13:26:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 54 MiB data, 263 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 456 KiB/s wr, 23 op/s
Jan 26 13:26:36 np0005596060 nova_compute[247421]: 2026-01-26 18:26:36.733 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:36.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:36.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:38 np0005596060 nova_compute[247421]: 2026-01-26 18:26:38.395 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 26 13:26:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:38.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:38.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Jan 26 13:26:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Jan 26 13:26:39 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Jan 26 13:26:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Jan 26 13:26:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:26:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:40.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:26:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:40.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:41 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:26:41.387 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:26:41 np0005596060 nova_compute[247421]: 2026-01-26 18:26:41.735 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Jan 26 13:26:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:26:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:42.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:26:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:42.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:43 np0005596060 nova_compute[247421]: 2026-01-26 18:26:43.397 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:26:44
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'vms', '.mgr', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Jan 26 13:26:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:26:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:44.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:26:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:26:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:44.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:26:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:26:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 204 B/s wr, 1 op/s
Jan 26 13:26:46 np0005596060 nova_compute[247421]: 2026-01-26 18:26:46.737 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:46.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:26:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:46.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:26:48 np0005596060 nova_compute[247421]: 2026-01-26 18:26:48.398 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:26:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:48.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:48.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:26:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:50.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:51 np0005596060 nova_compute[247421]: 2026-01-26 18:26:51.740 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:26:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:26:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:52.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:26:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:52.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:53 np0005596060 nova_compute[247421]: 2026-01-26 18:26:53.400 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:26:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:54.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:54.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:26:56 np0005596060 nova_compute[247421]: 2026-01-26 18:26:56.741 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:56.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:56.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:26:58 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 05a53d4a-0f25-4341-9f88-8571553b679e does not exist
Jan 26 13:26:58 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d1ebb5e4-d705-4cd7-9b82-ddad2cc7823e does not exist
Jan 26 13:26:58 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d333ebf0-606e-408d-acf6-9e85410d1a7c does not exist
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:26:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:26:58 np0005596060 nova_compute[247421]: 2026-01-26 18:26:58.402 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:26:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:26:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:26:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:26:58.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:26:58 np0005596060 podman[280012]: 2026-01-26 18:26:58.809316629 +0000 UTC m=+0.060882360 container create 8074960d69207d5d3d52793fecd2c5977aa83ecd001565313a20bfee59df8f85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:26:58 np0005596060 systemd[1]: Started libpod-conmon-8074960d69207d5d3d52793fecd2c5977aa83ecd001565313a20bfee59df8f85.scope.
Jan 26 13:26:58 np0005596060 podman[280012]: 2026-01-26 18:26:58.773031636 +0000 UTC m=+0.024597397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:26:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:26:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:26:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:26:58.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:26:58 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:26:58 np0005596060 podman[280012]: 2026-01-26 18:26:58.903165386 +0000 UTC m=+0.154731137 container init 8074960d69207d5d3d52793fecd2c5977aa83ecd001565313a20bfee59df8f85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:26:58 np0005596060 podman[280012]: 2026-01-26 18:26:58.912710239 +0000 UTC m=+0.164275970 container start 8074960d69207d5d3d52793fecd2c5977aa83ecd001565313a20bfee59df8f85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:26:58 np0005596060 podman[280012]: 2026-01-26 18:26:58.915555162 +0000 UTC m=+0.167120923 container attach 8074960d69207d5d3d52793fecd2c5977aa83ecd001565313a20bfee59df8f85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:26:58 np0005596060 nice_bhaskara[280028]: 167 167
Jan 26 13:26:58 np0005596060 systemd[1]: libpod-8074960d69207d5d3d52793fecd2c5977aa83ecd001565313a20bfee59df8f85.scope: Deactivated successfully.
Jan 26 13:26:58 np0005596060 conmon[280028]: conmon 8074960d69207d5d3d52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8074960d69207d5d3d52793fecd2c5977aa83ecd001565313a20bfee59df8f85.scope/container/memory.events
Jan 26 13:26:58 np0005596060 podman[280012]: 2026-01-26 18:26:58.921283767 +0000 UTC m=+0.172849498 container died 8074960d69207d5d3d52793fecd2c5977aa83ecd001565313a20bfee59df8f85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bhaskara, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 13:26:58 np0005596060 systemd[1]: var-lib-containers-storage-overlay-595bf507e98f0f4948688a5ead4766432fc93c0316e6fe5048b25694add33c95-merged.mount: Deactivated successfully.
Jan 26 13:26:58 np0005596060 podman[280012]: 2026-01-26 18:26:58.961339286 +0000 UTC m=+0.212905017 container remove 8074960d69207d5d3d52793fecd2c5977aa83ecd001565313a20bfee59df8f85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bhaskara, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:26:58 np0005596060 systemd[1]: libpod-conmon-8074960d69207d5d3d52793fecd2c5977aa83ecd001565313a20bfee59df8f85.scope: Deactivated successfully.
Jan 26 13:26:59 np0005596060 podman[280053]: 2026-01-26 18:26:59.12776263 +0000 UTC m=+0.043055536 container create fd20f81de593e1bc7fffcc430bedc25521a62799759ebee88e1280b210a58729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 26 13:26:59 np0005596060 systemd[1]: Started libpod-conmon-fd20f81de593e1bc7fffcc430bedc25521a62799759ebee88e1280b210a58729.scope.
Jan 26 13:26:59 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:26:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec71b5348dab7b788fec78f65d87c90a87404d5043472aaa886d208c7338d861/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:26:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec71b5348dab7b788fec78f65d87c90a87404d5043472aaa886d208c7338d861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:26:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec71b5348dab7b788fec78f65d87c90a87404d5043472aaa886d208c7338d861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:26:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec71b5348dab7b788fec78f65d87c90a87404d5043472aaa886d208c7338d861/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:26:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec71b5348dab7b788fec78f65d87c90a87404d5043472aaa886d208c7338d861/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:26:59 np0005596060 podman[280053]: 2026-01-26 18:26:59.109121846 +0000 UTC m=+0.024414782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:26:59 np0005596060 podman[280053]: 2026-01-26 18:26:59.208831733 +0000 UTC m=+0.124124639 container init fd20f81de593e1bc7fffcc430bedc25521a62799759ebee88e1280b210a58729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_murdock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:26:59 np0005596060 podman[280053]: 2026-01-26 18:26:59.215323388 +0000 UTC m=+0.130616284 container start fd20f81de593e1bc7fffcc430bedc25521a62799759ebee88e1280b210a58729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_murdock, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:26:59 np0005596060 podman[280053]: 2026-01-26 18:26:59.218201201 +0000 UTC m=+0.133494107 container attach fd20f81de593e1bc7fffcc430bedc25521a62799759ebee88e1280b210a58729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_murdock, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:26:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:26:59 np0005596060 charming_murdock[280069]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:26:59 np0005596060 charming_murdock[280069]: --> relative data size: 1.0
Jan 26 13:26:59 np0005596060 charming_murdock[280069]: --> All data devices are unavailable
Jan 26 13:27:00 np0005596060 systemd[1]: libpod-fd20f81de593e1bc7fffcc430bedc25521a62799759ebee88e1280b210a58729.scope: Deactivated successfully.
Jan 26 13:27:00 np0005596060 podman[280053]: 2026-01-26 18:27:00.016959331 +0000 UTC m=+0.932252237 container died fd20f81de593e1bc7fffcc430bedc25521a62799759ebee88e1280b210a58729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_murdock, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:27:00 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ec71b5348dab7b788fec78f65d87c90a87404d5043472aaa886d208c7338d861-merged.mount: Deactivated successfully.
Jan 26 13:27:00 np0005596060 podman[280053]: 2026-01-26 18:27:00.072217347 +0000 UTC m=+0.987510253 container remove fd20f81de593e1bc7fffcc430bedc25521a62799759ebee88e1280b210a58729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_murdock, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:27:00 np0005596060 systemd[1]: libpod-conmon-fd20f81de593e1bc7fffcc430bedc25521a62799759ebee88e1280b210a58729.scope: Deactivated successfully.
Jan 26 13:27:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:00 np0005596060 podman[280239]: 2026-01-26 18:27:00.704583293 +0000 UTC m=+0.047002787 container create 037434a37666f3b0f398f742796f77ecc072f07dfa95297284a7f1d223b6b028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_snyder, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 13:27:00 np0005596060 systemd[1]: Started libpod-conmon-037434a37666f3b0f398f742796f77ecc072f07dfa95297284a7f1d223b6b028.scope.
Jan 26 13:27:00 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:27:00 np0005596060 podman[280239]: 2026-01-26 18:27:00.683006194 +0000 UTC m=+0.025425728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:27:00 np0005596060 podman[280239]: 2026-01-26 18:27:00.789697238 +0000 UTC m=+0.132116722 container init 037434a37666f3b0f398f742796f77ecc072f07dfa95297284a7f1d223b6b028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 13:27:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:00.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:00 np0005596060 podman[280239]: 2026-01-26 18:27:00.799345404 +0000 UTC m=+0.141764888 container start 037434a37666f3b0f398f742796f77ecc072f07dfa95297284a7f1d223b6b028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_snyder, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:27:00 np0005596060 cool_snyder[280256]: 167 167
Jan 26 13:27:00 np0005596060 systemd[1]: libpod-037434a37666f3b0f398f742796f77ecc072f07dfa95297284a7f1d223b6b028.scope: Deactivated successfully.
Jan 26 13:27:00 np0005596060 podman[280239]: 2026-01-26 18:27:00.805351586 +0000 UTC m=+0.147771090 container attach 037434a37666f3b0f398f742796f77ecc072f07dfa95297284a7f1d223b6b028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_snyder, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 13:27:00 np0005596060 podman[280239]: 2026-01-26 18:27:00.8074579 +0000 UTC m=+0.149877384 container died 037434a37666f3b0f398f742796f77ecc072f07dfa95297284a7f1d223b6b028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:27:00 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4d3ca161bd9897d69fb70d91ba6457b00889999c7851a4cabb4cd6a2df0632b7-merged.mount: Deactivated successfully.
Jan 26 13:27:00 np0005596060 podman[280239]: 2026-01-26 18:27:00.86799686 +0000 UTC m=+0.210416344 container remove 037434a37666f3b0f398f742796f77ecc072f07dfa95297284a7f1d223b6b028 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 13:27:00 np0005596060 systemd[1]: libpod-conmon-037434a37666f3b0f398f742796f77ecc072f07dfa95297284a7f1d223b6b028.scope: Deactivated successfully.
Jan 26 13:27:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:00.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:01 np0005596060 podman[280282]: 2026-01-26 18:27:01.022632934 +0000 UTC m=+0.039280690 container create 9f9e3929b6a54407c9b6c3e7033c59b6bdab89c44d1372561ba41c5c04e9e5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 13:27:01 np0005596060 systemd[1]: Started libpod-conmon-9f9e3929b6a54407c9b6c3e7033c59b6bdab89c44d1372561ba41c5c04e9e5cb.scope.
Jan 26 13:27:01 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:27:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f604e86432e7a67262bdf78b97184b7b186d3fe402707180e1bab0a2400d9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:27:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f604e86432e7a67262bdf78b97184b7b186d3fe402707180e1bab0a2400d9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:27:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f604e86432e7a67262bdf78b97184b7b186d3fe402707180e1bab0a2400d9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:27:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00f604e86432e7a67262bdf78b97184b7b186d3fe402707180e1bab0a2400d9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:27:01 np0005596060 podman[280282]: 2026-01-26 18:27:01.097602461 +0000 UTC m=+0.114250237 container init 9f9e3929b6a54407c9b6c3e7033c59b6bdab89c44d1372561ba41c5c04e9e5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mahavira, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 13:27:01 np0005596060 podman[280282]: 2026-01-26 18:27:01.006150715 +0000 UTC m=+0.022798491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:27:01 np0005596060 podman[280282]: 2026-01-26 18:27:01.104753303 +0000 UTC m=+0.121401069 container start 9f9e3929b6a54407c9b6c3e7033c59b6bdab89c44d1372561ba41c5c04e9e5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:27:01 np0005596060 podman[280282]: 2026-01-26 18:27:01.108493088 +0000 UTC m=+0.125140844 container attach 9f9e3929b6a54407c9b6c3e7033c59b6bdab89c44d1372561ba41c5c04e9e5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mahavira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 13:27:01 np0005596060 nova_compute[247421]: 2026-01-26 18:27:01.744 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]: {
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:    "1": [
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:        {
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            "devices": [
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "/dev/loop3"
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            ],
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            "lv_name": "ceph_lv0",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            "lv_size": "7511998464",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            "name": "ceph_lv0",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            "tags": {
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "ceph.cluster_name": "ceph",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "ceph.crush_device_class": "",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "ceph.encrypted": "0",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "ceph.osd_id": "1",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "ceph.type": "block",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:                "ceph.vdo": "0"
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            },
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            "type": "block",
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:            "vg_name": "ceph_vg0"
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:        }
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]:    ]
Jan 26 13:27:01 np0005596060 epic_mahavira[280299]: }
Jan 26 13:27:01 np0005596060 systemd[1]: libpod-9f9e3929b6a54407c9b6c3e7033c59b6bdab89c44d1372561ba41c5c04e9e5cb.scope: Deactivated successfully.
Jan 26 13:27:01 np0005596060 podman[280282]: 2026-01-26 18:27:01.910575023 +0000 UTC m=+0.927222779 container died 9f9e3929b6a54407c9b6c3e7033c59b6bdab89c44d1372561ba41c5c04e9e5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 13:27:02 np0005596060 systemd[1]: var-lib-containers-storage-overlay-00f604e86432e7a67262bdf78b97184b7b186d3fe402707180e1bab0a2400d9a-merged.mount: Deactivated successfully.
Jan 26 13:27:02 np0005596060 podman[280282]: 2026-01-26 18:27:02.031349505 +0000 UTC m=+1.047997261 container remove 9f9e3929b6a54407c9b6c3e7033c59b6bdab89c44d1372561ba41c5c04e9e5cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mahavira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:27:02 np0005596060 systemd[1]: libpod-conmon-9f9e3929b6a54407c9b6c3e7033c59b6bdab89c44d1372561ba41c5c04e9e5cb.scope: Deactivated successfully.
Jan 26 13:27:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:02 np0005596060 podman[280459]: 2026-01-26 18:27:02.638581013 +0000 UTC m=+0.023004436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:27:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:27:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:02.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:27:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:02.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:02 np0005596060 podman[280459]: 2026-01-26 18:27:02.950701673 +0000 UTC m=+0.335125096 container create 156d5cc6270ccc1f955ce328c37d730404950d276bde67faaa3ab5053f7b8be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:27:03 np0005596060 systemd[1]: Started libpod-conmon-156d5cc6270ccc1f955ce328c37d730404950d276bde67faaa3ab5053f7b8be0.scope.
Jan 26 13:27:03 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:27:03 np0005596060 podman[280459]: 2026-01-26 18:27:03.192460074 +0000 UTC m=+0.576883497 container init 156d5cc6270ccc1f955ce328c37d730404950d276bde67faaa3ab5053f7b8be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_taussig, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:27:03 np0005596060 podman[280459]: 2026-01-26 18:27:03.200997401 +0000 UTC m=+0.585420804 container start 156d5cc6270ccc1f955ce328c37d730404950d276bde67faaa3ab5053f7b8be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_taussig, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:27:03 np0005596060 magical_taussig[280475]: 167 167
Jan 26 13:27:03 np0005596060 systemd[1]: libpod-156d5cc6270ccc1f955ce328c37d730404950d276bde67faaa3ab5053f7b8be0.scope: Deactivated successfully.
Jan 26 13:27:03 np0005596060 podman[280459]: 2026-01-26 18:27:03.321466676 +0000 UTC m=+0.705890079 container attach 156d5cc6270ccc1f955ce328c37d730404950d276bde67faaa3ab5053f7b8be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_taussig, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 13:27:03 np0005596060 podman[280459]: 2026-01-26 18:27:03.321863376 +0000 UTC m=+0.706286809 container died 156d5cc6270ccc1f955ce328c37d730404950d276bde67faaa3ab5053f7b8be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_taussig, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 26 13:27:03 np0005596060 nova_compute[247421]: 2026-01-26 18:27:03.404 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:03 np0005596060 systemd[1]: var-lib-containers-storage-overlay-e3dc90251cfb399eae6c5a4141c85b36a14299ba4b8a45705ecf35e8a5f9a8eb-merged.mount: Deactivated successfully.
Jan 26 13:27:03 np0005596060 podman[280459]: 2026-01-26 18:27:03.649729996 +0000 UTC m=+1.034153399 container remove 156d5cc6270ccc1f955ce328c37d730404950d276bde67faaa3ab5053f7b8be0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_taussig, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:27:03 np0005596060 systemd[1]: libpod-conmon-156d5cc6270ccc1f955ce328c37d730404950d276bde67faaa3ab5053f7b8be0.scope: Deactivated successfully.
Jan 26 13:27:03 np0005596060 podman[280499]: 2026-01-26 18:27:03.808473984 +0000 UTC m=+0.046385041 container create a6bbb4e95ea047ec1d8a1c0409de289be64d473de591d5db920102d34381509c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:27:03 np0005596060 systemd[1]: Started libpod-conmon-a6bbb4e95ea047ec1d8a1c0409de289be64d473de591d5db920102d34381509c.scope.
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:27:03 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:27:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3296dafa6be2fceeb4d01ed75d1e49fa8b3d7c25fb13b393954afa37c630c399/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:27:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:27:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3296dafa6be2fceeb4d01ed75d1e49fa8b3d7c25fb13b393954afa37c630c399/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:27:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3296dafa6be2fceeb4d01ed75d1e49fa8b3d7c25fb13b393954afa37c630c399/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:27:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3296dafa6be2fceeb4d01ed75d1e49fa8b3d7c25fb13b393954afa37c630c399/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:27:03 np0005596060 podman[280499]: 2026-01-26 18:27:03.78551863 +0000 UTC m=+0.023429757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:27:03 np0005596060 podman[280499]: 2026-01-26 18:27:03.923711925 +0000 UTC m=+0.161623002 container init a6bbb4e95ea047ec1d8a1c0409de289be64d473de591d5db920102d34381509c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:27:03 np0005596060 podman[280499]: 2026-01-26 18:27:03.929510473 +0000 UTC m=+0.167421530 container start a6bbb4e95ea047ec1d8a1c0409de289be64d473de591d5db920102d34381509c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:27:04 np0005596060 podman[280499]: 2026-01-26 18:27:04.002070139 +0000 UTC m=+0.239981216 container attach a6bbb4e95ea047ec1d8a1c0409de289be64d473de591d5db920102d34381509c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 26 13:27:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:27:04 np0005596060 epic_ellis[280517]: {
Jan 26 13:27:04 np0005596060 epic_ellis[280517]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:27:04 np0005596060 epic_ellis[280517]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:27:04 np0005596060 epic_ellis[280517]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:27:04 np0005596060 epic_ellis[280517]:        "osd_id": 1,
Jan 26 13:27:04 np0005596060 epic_ellis[280517]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:27:04 np0005596060 epic_ellis[280517]:        "type": "bluestore"
Jan 26 13:27:04 np0005596060 epic_ellis[280517]:    }
Jan 26 13:27:04 np0005596060 epic_ellis[280517]: }
Jan 26 13:27:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:04.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:04 np0005596060 systemd[1]: libpod-a6bbb4e95ea047ec1d8a1c0409de289be64d473de591d5db920102d34381509c.scope: Deactivated successfully.
Jan 26 13:27:04 np0005596060 podman[280540]: 2026-01-26 18:27:04.843799412 +0000 UTC m=+0.024417292 container died a6bbb4e95ea047ec1d8a1c0409de289be64d473de591d5db920102d34381509c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:27:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-3296dafa6be2fceeb4d01ed75d1e49fa8b3d7c25fb13b393954afa37c630c399-merged.mount: Deactivated successfully.
Jan 26 13:27:04 np0005596060 podman[280540]: 2026-01-26 18:27:04.887539825 +0000 UTC m=+0.068157705 container remove a6bbb4e95ea047ec1d8a1c0409de289be64d473de591d5db920102d34381509c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_ellis, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 13:27:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:04.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:04 np0005596060 systemd[1]: libpod-conmon-a6bbb4e95ea047ec1d8a1c0409de289be64d473de591d5db920102d34381509c.scope: Deactivated successfully.
Jan 26 13:27:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:27:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:27:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:27:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:27:05 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 3dd2ccce-999b-496d-b6f2-268f8e2bcfe4 does not exist
Jan 26 13:27:05 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9cd6e229-02fc-495a-94df-e27cffaca4c9 does not exist
Jan 26 13:27:05 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 03b54ad1-dc67-48c9-a39c-84f2fd5baa80 does not exist
Jan 26 13:27:05 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:27:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:06 np0005596060 nova_compute[247421]: 2026-01-26 18:27:06.746 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:06 np0005596060 podman[280606]: 2026-01-26 18:27:06.79537791 +0000 UTC m=+0.055076823 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 13:27:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:06.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:06.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:06 np0005596060 podman[280607]: 2026-01-26 18:27:06.909220356 +0000 UTC m=+0.168757894 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:27:07 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:27:08 np0005596060 nova_compute[247421]: 2026-01-26 18:27:08.407 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:08 np0005596060 nova_compute[247421]: 2026-01-26 18:27:08.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:27:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:08.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:08.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:09 np0005596060 nova_compute[247421]: 2026-01-26 18:27:09.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:27:09 np0005596060 nova_compute[247421]: 2026-01-26 18:27:09.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:27:09 np0005596060 nova_compute[247421]: 2026-01-26 18:27:09.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:27:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:27:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:27:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:10.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:27:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:27:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:10.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:27:11 np0005596060 nova_compute[247421]: 2026-01-26 18:27:11.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:27:11 np0005596060 nova_compute[247421]: 2026-01-26 18:27:11.749 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:12 np0005596060 nova_compute[247421]: 2026-01-26 18:27:12.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:27:12 np0005596060 nova_compute[247421]: 2026-01-26 18:27:12.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:27:12 np0005596060 nova_compute[247421]: 2026-01-26 18:27:12.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:27:12 np0005596060 nova_compute[247421]: 2026-01-26 18:27:12.667 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:27:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:12.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:12.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:13 np0005596060 nova_compute[247421]: 2026-01-26 18:27:13.409 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:13 np0005596060 nova_compute[247421]: 2026-01-26 18:27:13.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:27:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:27:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:27:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:27:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:27:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:27:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:27:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:14 np0005596060 nova_compute[247421]: 2026-01-26 18:27:14.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:27:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:27:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:27:14.756 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:27:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:27:14.757 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:27:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:27:14.757 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:27:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:27:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:14.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:27:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:14.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:16 np0005596060 nova_compute[247421]: 2026-01-26 18:27:16.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:27:16 np0005596060 nova_compute[247421]: 2026-01-26 18:27:16.750 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:16.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:16.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:16 np0005596060 nova_compute[247421]: 2026-01-26 18:27:16.909 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:27:16 np0005596060 nova_compute[247421]: 2026-01-26 18:27:16.910 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:27:16 np0005596060 nova_compute[247421]: 2026-01-26 18:27:16.910 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:27:16 np0005596060 nova_compute[247421]: 2026-01-26 18:27:16.910 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:27:16 np0005596060 nova_compute[247421]: 2026-01-26 18:27:16.910 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:27:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:27:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1721863374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.346 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.502 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.503 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4800MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.503 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.504 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.528 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:27:17.528 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:27:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:27:17.529 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.645 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.645 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.738 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing inventories for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.840 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating ProviderTree inventory for provider c679f5ea-e093-4909-bb04-0342c8551a8f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.841 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.879 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing aggregate associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.907 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing trait associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 26 13:27:17 np0005596060 nova_compute[247421]: 2026-01-26 18:27:17.940 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:27:18 np0005596060 nova_compute[247421]: 2026-01-26 18:27:18.412 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:27:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3310509679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:27:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:18 np0005596060 nova_compute[247421]: 2026-01-26 18:27:18.498 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:27:18 np0005596060 nova_compute[247421]: 2026-01-26 18:27:18.506 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:27:18 np0005596060 nova_compute[247421]: 2026-01-26 18:27:18.579 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:27:18 np0005596060 nova_compute[247421]: 2026-01-26 18:27:18.581 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:27:18 np0005596060 nova_compute[247421]: 2026-01-26 18:27:18.581 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:27:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:18.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:18.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:27:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:20.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:20.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:21 np0005596060 nova_compute[247421]: 2026-01-26 18:27:21.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:27:21 np0005596060 nova_compute[247421]: 2026-01-26 18:27:21.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:27:21 np0005596060 nova_compute[247421]: 2026-01-26 18:27:21.751 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:22 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:27:22.530 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:27:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:27:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:22.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:27:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:22.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:23 np0005596060 nova_compute[247421]: 2026-01-26 18:27:23.414 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:27:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:24.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:24.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:25 np0005596060 ceph-osd[84834]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 26 13:27:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1617: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 170 B/s wr, 6 op/s
Jan 26 13:27:26 np0005596060 nova_compute[247421]: 2026-01-26 18:27:26.663 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:27:26 np0005596060 nova_compute[247421]: 2026-01-26 18:27:26.663 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 26 13:27:26 np0005596060 nova_compute[247421]: 2026-01-26 18:27:26.752 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:26.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:26.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:28 np0005596060 nova_compute[247421]: 2026-01-26 18:27:28.415 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 11 op/s
Jan 26 13:27:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:28.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:28.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:29 np0005596060 nova_compute[247421]: 2026-01-26 18:27:29.670 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:27:29 np0005596060 nova_compute[247421]: 2026-01-26 18:27:29.670 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 26 13:27:29 np0005596060 nova_compute[247421]: 2026-01-26 18:27:29.691 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 26 13:27:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:27:29 np0005596060 ceph-osd[84834]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 26 13:27:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 41 MiB data, 250 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 597 B/s wr, 11 op/s
Jan 26 13:27:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:27:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:30.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:27:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:30.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:31 np0005596060 nova_compute[247421]: 2026-01-26 18:27:31.754 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 26 13:27:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:32.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:32.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:33 np0005596060 nova_compute[247421]: 2026-01-26 18:27:33.417 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 26 13:27:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:27:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:34.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:27:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:34.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:27:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 26 13:27:36 np0005596060 nova_compute[247421]: 2026-01-26 18:27:36.756 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:36.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:27:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:36.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:27:37 np0005596060 podman[280807]: 2026-01-26 18:27:37.797034521 +0000 UTC m=+0.059064313 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 13:27:37 np0005596060 podman[280808]: 2026-01-26 18:27:37.829032465 +0000 UTC m=+0.086570483 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 26 13:27:38 np0005596060 nova_compute[247421]: 2026-01-26 18:27:38.457 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 404 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Jan 26 13:27:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:27:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:38.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:27:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:38.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:27:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 26 13:27:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:27:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:40.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:27:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:27:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:40.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:27:41 np0005596060 nova_compute[247421]: 2026-01-26 18:27:41.758 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 26 13:27:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:27:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:42.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:27:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:42.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:43 np0005596060 nova_compute[247421]: 2026-01-26 18:27:43.458 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:27:44
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.mgr', 'backups', 'images', 'volumes', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:27:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:44.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:27:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:27:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:27:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:44.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:27:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:46 np0005596060 nova_compute[247421]: 2026-01-26 18:27:46.760 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:46.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:27:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:46.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:27:48 np0005596060 nova_compute[247421]: 2026-01-26 18:27:48.461 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:48.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:48.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:27:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:27:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:50.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:27:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:50.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:51 np0005596060 nova_compute[247421]: 2026-01-26 18:27:51.762 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:52.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:52.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:53 np0005596060 nova_compute[247421]: 2026-01-26 18:27:53.463 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:27:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:54.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:54.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:56 np0005596060 nova_compute[247421]: 2026-01-26 18:27:56.764 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 26 13:27:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:56.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 26 13:27:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:56.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:57 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:27:57.446 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:27:57 np0005596060 nova_compute[247421]: 2026-01-26 18:27:57.446 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:57 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:27:57.447 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:27:58 np0005596060 nova_compute[247421]: 2026-01-26 18:27:58.464 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:27:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:27:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:27:58.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:27:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:27:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:27:58.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:27:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:28:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:28:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:00.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:00.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:01 np0005596060 nova_compute[247421]: 2026-01-26 18:28:01.766 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 10 op/s
Jan 26 13:28:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:02.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:02.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:03 np0005596060 nova_compute[247421]: 2026-01-26 18:28:03.466 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:28:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:28:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 10 op/s
Jan 26 13:28:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:04.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:28:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:04.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:28:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:28:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 597 B/s wr, 17 op/s
Jan 26 13:28:06 np0005596060 nova_compute[247421]: 2026-01-26 18:28:06.768 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:06.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:06.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:28:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:28:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:28:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:28:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:28:07 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:28:07.449 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:28:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:28:07 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 97287301-a68d-4508-89ea-e7fa221044cb does not exist
Jan 26 13:28:07 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9c611aca-47b8-40b9-8cf0-e4d13f1ec1ec does not exist
Jan 26 13:28:07 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b52c644c-a9b9-406b-aa3a-c4d3dbbe562e does not exist
Jan 26 13:28:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:28:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:28:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:28:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:28:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:28:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:28:07 np0005596060 podman[281097]: 2026-01-26 18:28:07.956044505 +0000 UTC m=+0.055896343 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 26 13:28:08 np0005596060 podman[281098]: 2026-01-26 18:28:08.019965651 +0000 UTC m=+0.117687364 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 13:28:08 np0005596060 podman[281235]: 2026-01-26 18:28:08.335306213 +0000 UTC m=+0.047690285 container create 24c8da2a8c79b63c3673887c3e29ba8c59241f2f7f932db943841d8fe971f31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:28:08 np0005596060 systemd[1]: Started libpod-conmon-24c8da2a8c79b63c3673887c3e29ba8c59241f2f7f932db943841d8fe971f31e.scope.
Jan 26 13:28:08 np0005596060 podman[281235]: 2026-01-26 18:28:08.313238131 +0000 UTC m=+0.025622233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:28:08 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:28:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:28:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:28:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:28:08 np0005596060 podman[281235]: 2026-01-26 18:28:08.433712856 +0000 UTC m=+0.146096948 container init 24c8da2a8c79b63c3673887c3e29ba8c59241f2f7f932db943841d8fe971f31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:28:08 np0005596060 podman[281235]: 2026-01-26 18:28:08.442951791 +0000 UTC m=+0.155335863 container start 24c8da2a8c79b63c3673887c3e29ba8c59241f2f7f932db943841d8fe971f31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:28:08 np0005596060 podman[281235]: 2026-01-26 18:28:08.446253035 +0000 UTC m=+0.158637137 container attach 24c8da2a8c79b63c3673887c3e29ba8c59241f2f7f932db943841d8fe971f31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 13:28:08 np0005596060 pedantic_mccarthy[281251]: 167 167
Jan 26 13:28:08 np0005596060 systemd[1]: libpod-24c8da2a8c79b63c3673887c3e29ba8c59241f2f7f932db943841d8fe971f31e.scope: Deactivated successfully.
Jan 26 13:28:08 np0005596060 podman[281235]: 2026-01-26 18:28:08.450441512 +0000 UTC m=+0.162825584 container died 24c8da2a8c79b63c3673887c3e29ba8c59241f2f7f932db943841d8fe971f31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 13:28:08 np0005596060 nova_compute[247421]: 2026-01-26 18:28:08.468 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:08 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a57a4e4fdff84023ef7e79d6bcbf91624008b6e421e61cd5a6e9414f560f1f92-merged.mount: Deactivated successfully.
Jan 26 13:28:08 np0005596060 podman[281235]: 2026-01-26 18:28:08.491587158 +0000 UTC m=+0.203971220 container remove 24c8da2a8c79b63c3673887c3e29ba8c59241f2f7f932db943841d8fe971f31e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:28:08 np0005596060 systemd[1]: libpod-conmon-24c8da2a8c79b63c3673887c3e29ba8c59241f2f7f932db943841d8fe971f31e.scope: Deactivated successfully.
Jan 26 13:28:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 597 B/s wr, 25 op/s
Jan 26 13:28:08 np0005596060 ceph-osd[84834]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 26 13:28:08 np0005596060 podman[281275]: 2026-01-26 18:28:08.65872105 +0000 UTC m=+0.046005561 container create 930de56cd6190e1295857f01c8eb7bedb69b6fe28476860a3f03a7a00c4017a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wu, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 13:28:08 np0005596060 systemd[1]: Started libpod-conmon-930de56cd6190e1295857f01c8eb7bedb69b6fe28476860a3f03a7a00c4017a8.scope.
Jan 26 13:28:08 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:28:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0bf6e83d0cf91e7502a405b7ce7f059821aecdddc846edd342aa341f885d7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:08 np0005596060 podman[281275]: 2026-01-26 18:28:08.638799303 +0000 UTC m=+0.026083844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:28:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0bf6e83d0cf91e7502a405b7ce7f059821aecdddc846edd342aa341f885d7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0bf6e83d0cf91e7502a405b7ce7f059821aecdddc846edd342aa341f885d7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0bf6e83d0cf91e7502a405b7ce7f059821aecdddc846edd342aa341f885d7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e0bf6e83d0cf91e7502a405b7ce7f059821aecdddc846edd342aa341f885d7b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:08 np0005596060 podman[281275]: 2026-01-26 18:28:08.748476564 +0000 UTC m=+0.135761095 container init 930de56cd6190e1295857f01c8eb7bedb69b6fe28476860a3f03a7a00c4017a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wu, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:28:08 np0005596060 podman[281275]: 2026-01-26 18:28:08.755744928 +0000 UTC m=+0.143029479 container start 930de56cd6190e1295857f01c8eb7bedb69b6fe28476860a3f03a7a00c4017a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wu, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:28:08 np0005596060 podman[281275]: 2026-01-26 18:28:08.759832472 +0000 UTC m=+0.147116983 container attach 930de56cd6190e1295857f01c8eb7bedb69b6fe28476860a3f03a7a00c4017a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 13:28:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:08.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:08.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:09 np0005596060 busy_wu[281291]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:28:09 np0005596060 busy_wu[281291]: --> relative data size: 1.0
Jan 26 13:28:09 np0005596060 busy_wu[281291]: --> All data devices are unavailable
Jan 26 13:28:09 np0005596060 systemd[1]: libpod-930de56cd6190e1295857f01c8eb7bedb69b6fe28476860a3f03a7a00c4017a8.scope: Deactivated successfully.
Jan 26 13:28:09 np0005596060 podman[281275]: 2026-01-26 18:28:09.535932856 +0000 UTC m=+0.923217367 container died 930de56cd6190e1295857f01c8eb7bedb69b6fe28476860a3f03a7a00c4017a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wu, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:28:09 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4e0bf6e83d0cf91e7502a405b7ce7f059821aecdddc846edd342aa341f885d7b-merged.mount: Deactivated successfully.
Jan 26 13:28:09 np0005596060 podman[281275]: 2026-01-26 18:28:09.593022498 +0000 UTC m=+0.980307009 container remove 930de56cd6190e1295857f01c8eb7bedb69b6fe28476860a3f03a7a00c4017a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 13:28:09 np0005596060 systemd[1]: libpod-conmon-930de56cd6190e1295857f01c8eb7bedb69b6fe28476860a3f03a7a00c4017a8.scope: Deactivated successfully.
Jan 26 13:28:09 np0005596060 nova_compute[247421]: 2026-01-26 18:28:09.672 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:28:10 np0005596060 podman[281456]: 2026-01-26 18:28:10.210521028 +0000 UTC m=+0.041743523 container create 785bbca7da603288f13be6fa9c2398657ac276afebab278dd555792c5e9b195f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hertz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 13:28:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:28:10 np0005596060 systemd[1]: Started libpod-conmon-785bbca7da603288f13be6fa9c2398657ac276afebab278dd555792c5e9b195f.scope.
Jan 26 13:28:10 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:28:10 np0005596060 podman[281456]: 2026-01-26 18:28:10.18938599 +0000 UTC m=+0.020608495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:28:10 np0005596060 podman[281456]: 2026-01-26 18:28:10.287537547 +0000 UTC m=+0.118760042 container init 785bbca7da603288f13be6fa9c2398657ac276afebab278dd555792c5e9b195f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 13:28:10 np0005596060 podman[281456]: 2026-01-26 18:28:10.29592047 +0000 UTC m=+0.127142975 container start 785bbca7da603288f13be6fa9c2398657ac276afebab278dd555792c5e9b195f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hertz, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 13:28:10 np0005596060 podman[281456]: 2026-01-26 18:28:10.29984901 +0000 UTC m=+0.131071525 container attach 785bbca7da603288f13be6fa9c2398657ac276afebab278dd555792c5e9b195f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hertz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:28:10 np0005596060 zealous_hertz[281472]: 167 167
Jan 26 13:28:10 np0005596060 systemd[1]: libpod-785bbca7da603288f13be6fa9c2398657ac276afebab278dd555792c5e9b195f.scope: Deactivated successfully.
Jan 26 13:28:10 np0005596060 podman[281456]: 2026-01-26 18:28:10.301767119 +0000 UTC m=+0.132989604 container died 785bbca7da603288f13be6fa9c2398657ac276afebab278dd555792c5e9b195f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 13:28:10 np0005596060 systemd[1]: var-lib-containers-storage-overlay-65a9d4b78fa0c1cef67c6cfbf36fb908528c1fcaab488437711111ecec87c249-merged.mount: Deactivated successfully.
Jan 26 13:28:10 np0005596060 podman[281456]: 2026-01-26 18:28:10.341292614 +0000 UTC m=+0.172515099 container remove 785bbca7da603288f13be6fa9c2398657ac276afebab278dd555792c5e9b195f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:28:10 np0005596060 systemd[1]: libpod-conmon-785bbca7da603288f13be6fa9c2398657ac276afebab278dd555792c5e9b195f.scope: Deactivated successfully.
Jan 26 13:28:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 597 B/s wr, 25 op/s
Jan 26 13:28:10 np0005596060 podman[281495]: 2026-01-26 18:28:10.504320592 +0000 UTC m=+0.038741297 container create 793d0641d458014a5cbe5f7907c1ed50ee24856d0ad85b1481f564f8f184f3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 13:28:10 np0005596060 systemd[1]: Started libpod-conmon-793d0641d458014a5cbe5f7907c1ed50ee24856d0ad85b1481f564f8f184f3db.scope.
Jan 26 13:28:10 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:28:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc98428f63afc8bfbd31ea0e784cc5a10639c8b5772aa8b4df1e172d96aa102/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc98428f63afc8bfbd31ea0e784cc5a10639c8b5772aa8b4df1e172d96aa102/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc98428f63afc8bfbd31ea0e784cc5a10639c8b5772aa8b4df1e172d96aa102/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bc98428f63afc8bfbd31ea0e784cc5a10639c8b5772aa8b4df1e172d96aa102/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:10 np0005596060 podman[281495]: 2026-01-26 18:28:10.48815396 +0000 UTC m=+0.022574685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:28:10 np0005596060 podman[281495]: 2026-01-26 18:28:10.598029156 +0000 UTC m=+0.132449881 container init 793d0641d458014a5cbe5f7907c1ed50ee24856d0ad85b1481f564f8f184f3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 26 13:28:10 np0005596060 podman[281495]: 2026-01-26 18:28:10.607901017 +0000 UTC m=+0.142321722 container start 793d0641d458014a5cbe5f7907c1ed50ee24856d0ad85b1481f564f8f184f3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:28:10 np0005596060 podman[281495]: 2026-01-26 18:28:10.611368515 +0000 UTC m=+0.145789250 container attach 793d0641d458014a5cbe5f7907c1ed50ee24856d0ad85b1481f564f8f184f3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:28:10 np0005596060 nova_compute[247421]: 2026-01-26 18:28:10.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:28:10 np0005596060 nova_compute[247421]: 2026-01-26 18:28:10.674 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:28:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:28:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:10.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:28:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:10.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:11 np0005596060 sad_germain[281511]: {
Jan 26 13:28:11 np0005596060 sad_germain[281511]:    "1": [
Jan 26 13:28:11 np0005596060 sad_germain[281511]:        {
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            "devices": [
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "/dev/loop3"
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            ],
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            "lv_name": "ceph_lv0",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            "lv_size": "7511998464",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            "name": "ceph_lv0",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            "tags": {
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "ceph.cluster_name": "ceph",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "ceph.crush_device_class": "",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "ceph.encrypted": "0",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "ceph.osd_id": "1",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "ceph.type": "block",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:                "ceph.vdo": "0"
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            },
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            "type": "block",
Jan 26 13:28:11 np0005596060 sad_germain[281511]:            "vg_name": "ceph_vg0"
Jan 26 13:28:11 np0005596060 sad_germain[281511]:        }
Jan 26 13:28:11 np0005596060 sad_germain[281511]:    ]
Jan 26 13:28:11 np0005596060 sad_germain[281511]: }
Jan 26 13:28:11 np0005596060 systemd[1]: libpod-793d0641d458014a5cbe5f7907c1ed50ee24856d0ad85b1481f564f8f184f3db.scope: Deactivated successfully.
Jan 26 13:28:11 np0005596060 podman[281495]: 2026-01-26 18:28:11.432686049 +0000 UTC m=+0.967106774 container died 793d0641d458014a5cbe5f7907c1ed50ee24856d0ad85b1481f564f8f184f3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:28:11 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2bc98428f63afc8bfbd31ea0e784cc5a10639c8b5772aa8b4df1e172d96aa102-merged.mount: Deactivated successfully.
Jan 26 13:28:11 np0005596060 podman[281495]: 2026-01-26 18:28:11.500697079 +0000 UTC m=+1.035117784 container remove 793d0641d458014a5cbe5f7907c1ed50ee24856d0ad85b1481f564f8f184f3db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_germain, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 13:28:11 np0005596060 systemd[1]: libpod-conmon-793d0641d458014a5cbe5f7907c1ed50ee24856d0ad85b1481f564f8f184f3db.scope: Deactivated successfully.
Jan 26 13:28:11 np0005596060 nova_compute[247421]: 2026-01-26 18:28:11.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:28:11 np0005596060 nova_compute[247421]: 2026-01-26 18:28:11.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:28:11 np0005596060 nova_compute[247421]: 2026-01-26 18:28:11.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:28:11 np0005596060 nova_compute[247421]: 2026-01-26 18:28:11.770 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:12 np0005596060 podman[281672]: 2026-01-26 18:28:12.154436429 +0000 UTC m=+0.040355288 container create 057fe7cac6642502f67c18aba97c273249ea596f3da25be40e0662e6eda1b134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:28:12 np0005596060 systemd[1]: Started libpod-conmon-057fe7cac6642502f67c18aba97c273249ea596f3da25be40e0662e6eda1b134.scope.
Jan 26 13:28:12 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:28:12 np0005596060 podman[281672]: 2026-01-26 18:28:12.137004216 +0000 UTC m=+0.022923125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:28:12 np0005596060 podman[281672]: 2026-01-26 18:28:12.238155149 +0000 UTC m=+0.124074068 container init 057fe7cac6642502f67c18aba97c273249ea596f3da25be40e0662e6eda1b134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 13:28:12 np0005596060 podman[281672]: 2026-01-26 18:28:12.245714031 +0000 UTC m=+0.131632900 container start 057fe7cac6642502f67c18aba97c273249ea596f3da25be40e0662e6eda1b134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 26 13:28:12 np0005596060 unruffled_leakey[281689]: 167 167
Jan 26 13:28:12 np0005596060 systemd[1]: libpod-057fe7cac6642502f67c18aba97c273249ea596f3da25be40e0662e6eda1b134.scope: Deactivated successfully.
Jan 26 13:28:12 np0005596060 conmon[281689]: conmon 057fe7cac6642502f67c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-057fe7cac6642502f67c18aba97c273249ea596f3da25be40e0662e6eda1b134.scope/container/memory.events
Jan 26 13:28:12 np0005596060 podman[281672]: 2026-01-26 18:28:12.250819061 +0000 UTC m=+0.136738020 container attach 057fe7cac6642502f67c18aba97c273249ea596f3da25be40e0662e6eda1b134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:28:12 np0005596060 podman[281672]: 2026-01-26 18:28:12.251341224 +0000 UTC m=+0.137260103 container died 057fe7cac6642502f67c18aba97c273249ea596f3da25be40e0662e6eda1b134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 26 13:28:12 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ee1508291c52145802d517f215a27c8fdc5a418ecc7f74c7ecbc8eb6467c8e43-merged.mount: Deactivated successfully.
Jan 26 13:28:12 np0005596060 podman[281672]: 2026-01-26 18:28:12.294225875 +0000 UTC m=+0.180144744 container remove 057fe7cac6642502f67c18aba97c273249ea596f3da25be40e0662e6eda1b134 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:28:12 np0005596060 systemd[1]: libpod-conmon-057fe7cac6642502f67c18aba97c273249ea596f3da25be40e0662e6eda1b134.scope: Deactivated successfully.
Jan 26 13:28:12 np0005596060 podman[281714]: 2026-01-26 18:28:12.444389615 +0000 UTC m=+0.044035601 container create 7006b2ea65181346bb2243839192a06a3af04c1f3b931d3a1473780ee457326c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 13:28:12 np0005596060 systemd[1]: Started libpod-conmon-7006b2ea65181346bb2243839192a06a3af04c1f3b931d3a1473780ee457326c.scope.
Jan 26 13:28:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 180 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.5 MiB/s wr, 86 op/s
Jan 26 13:28:12 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:28:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba226fd005e830752f70c30221a99762c87624cab63d1d74f064d6f2646aec6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba226fd005e830752f70c30221a99762c87624cab63d1d74f064d6f2646aec6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba226fd005e830752f70c30221a99762c87624cab63d1d74f064d6f2646aec6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba226fd005e830752f70c30221a99762c87624cab63d1d74f064d6f2646aec6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:28:12 np0005596060 podman[281714]: 2026-01-26 18:28:12.424604402 +0000 UTC m=+0.024250438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:28:12 np0005596060 podman[281714]: 2026-01-26 18:28:12.527342726 +0000 UTC m=+0.126988732 container init 7006b2ea65181346bb2243839192a06a3af04c1f3b931d3a1473780ee457326c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hugle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:28:12 np0005596060 podman[281714]: 2026-01-26 18:28:12.532926068 +0000 UTC m=+0.132572054 container start 7006b2ea65181346bb2243839192a06a3af04c1f3b931d3a1473780ee457326c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hugle, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 13:28:12 np0005596060 podman[281714]: 2026-01-26 18:28:12.536447947 +0000 UTC m=+0.136093933 container attach 7006b2ea65181346bb2243839192a06a3af04c1f3b931d3a1473780ee457326c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hugle, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:28:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:28:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:12.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:28:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:28:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:12.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:28:13 np0005596060 charming_hugle[281730]: {
Jan 26 13:28:13 np0005596060 charming_hugle[281730]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:28:13 np0005596060 charming_hugle[281730]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:28:13 np0005596060 charming_hugle[281730]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:28:13 np0005596060 charming_hugle[281730]:        "osd_id": 1,
Jan 26 13:28:13 np0005596060 charming_hugle[281730]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:28:13 np0005596060 charming_hugle[281730]:        "type": "bluestore"
Jan 26 13:28:13 np0005596060 charming_hugle[281730]:    }
Jan 26 13:28:13 np0005596060 charming_hugle[281730]: }
Jan 26 13:28:13 np0005596060 systemd[1]: libpod-7006b2ea65181346bb2243839192a06a3af04c1f3b931d3a1473780ee457326c.scope: Deactivated successfully.
Jan 26 13:28:13 np0005596060 podman[281714]: 2026-01-26 18:28:13.423558305 +0000 UTC m=+1.023204291 container died 7006b2ea65181346bb2243839192a06a3af04c1f3b931d3a1473780ee457326c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hugle, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 26 13:28:13 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7ba226fd005e830752f70c30221a99762c87624cab63d1d74f064d6f2646aec6-merged.mount: Deactivated successfully.
Jan 26 13:28:13 np0005596060 nova_compute[247421]: 2026-01-26 18:28:13.469 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:13 np0005596060 podman[281714]: 2026-01-26 18:28:13.484991838 +0000 UTC m=+1.084637824 container remove 7006b2ea65181346bb2243839192a06a3af04c1f3b931d3a1473780ee457326c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:28:13 np0005596060 systemd[1]: libpod-conmon-7006b2ea65181346bb2243839192a06a3af04c1f3b931d3a1473780ee457326c.scope: Deactivated successfully.
Jan 26 13:28:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:28:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:28:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:28:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:28:13 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 3dec1807-903e-46dd-b193-a0f7226e0e34 does not exist
Jan 26 13:28:13 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev bc8875f8-22d1-4d01-9a02-27a5cd3a432c does not exist
Jan 26 13:28:13 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev eb5a4d50-e46f-4502-8527-4809084a5c42 does not exist
Jan 26 13:28:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:28:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:28:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:28:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:28:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:28:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:28:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 180 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 75 op/s
Jan 26 13:28:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:28:14 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:28:14 np0005596060 nova_compute[247421]: 2026-01-26 18:28:14.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:28:14 np0005596060 nova_compute[247421]: 2026-01-26 18:28:14.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:28:14 np0005596060 nova_compute[247421]: 2026-01-26 18:28:14.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:28:14 np0005596060 nova_compute[247421]: 2026-01-26 18:28:14.666 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:28:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:28:14.756 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:28:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:28:14.757 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:28:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:28:14.757 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:28:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:28:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:14.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:28:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:14.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:28:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:28:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2876238010' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:28:15 np0005596060 nova_compute[247421]: 2026-01-26 18:28:15.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:28:15 np0005596060 nova_compute[247421]: 2026-01-26 18:28:15.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:28:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 172 MiB data, 318 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 77 op/s
Jan 26 13:28:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Jan 26 13:28:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Jan 26 13:28:16 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Jan 26 13:28:16 np0005596060 nova_compute[247421]: 2026-01-26 18:28:16.773 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:16.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:16.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Jan 26 13:28:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Jan 26 13:28:17 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Jan 26 13:28:17 np0005596060 nova_compute[247421]: 2026-01-26 18:28:17.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:28:17 np0005596060 nova_compute[247421]: 2026-01-26 18:28:17.671 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:28:17 np0005596060 nova_compute[247421]: 2026-01-26 18:28:17.671 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:28:17 np0005596060 nova_compute[247421]: 2026-01-26 18:28:17.671 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:28:17 np0005596060 nova_compute[247421]: 2026-01-26 18:28:17.672 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:28:17 np0005596060 nova_compute[247421]: 2026-01-26 18:28:17.672 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:28:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:28:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2280896429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:28:18 np0005596060 nova_compute[247421]: 2026-01-26 18:28:18.106 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:28:18 np0005596060 nova_compute[247421]: 2026-01-26 18:28:18.262 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:28:18 np0005596060 nova_compute[247421]: 2026-01-26 18:28:18.263 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4806MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:28:18 np0005596060 nova_compute[247421]: 2026-01-26 18:28:18.263 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:28:18 np0005596060 nova_compute[247421]: 2026-01-26 18:28:18.263 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:28:18 np0005596060 nova_compute[247421]: 2026-01-26 18:28:18.318 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:28:18 np0005596060 nova_compute[247421]: 2026-01-26 18:28:18.319 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:28:18 np0005596060 nova_compute[247421]: 2026-01-26 18:28:18.334 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:28:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 134 MiB data, 304 MiB used, 21 GiB / 21 GiB avail; 73 KiB/s rd, 5.3 MiB/s wr, 112 op/s
Jan 26 13:28:18 np0005596060 nova_compute[247421]: 2026-01-26 18:28:18.520 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Jan 26 13:28:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Jan 26 13:28:18 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Jan 26 13:28:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:28:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1639500043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:28:18 np0005596060 nova_compute[247421]: 2026-01-26 18:28:18.824 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:28:18 np0005596060 nova_compute[247421]: 2026-01-26 18:28:18.828 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:28:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:18.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:18.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:19 np0005596060 nova_compute[247421]: 2026-01-26 18:28:19.297 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:28:19 np0005596060 nova_compute[247421]: 2026-01-26 18:28:19.299 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:28:19 np0005596060 nova_compute[247421]: 2026-01-26 18:28:19.299 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:28:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Jan 26 13:28:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Jan 26 13:28:19 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Jan 26 13:28:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:28:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 134 MiB data, 304 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 0 B/s wr, 37 op/s
Jan 26 13:28:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:28:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:20.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:28:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:28:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1876270920' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:28:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:28:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1876270920' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:28:21 np0005596060 nova_compute[247421]: 2026-01-26 18:28:21.777 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 105 op/s
Jan 26 13:28:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:22.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:22.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:23 np0005596060 nova_compute[247421]: 2026-01-26 18:28:23.301 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:28:23 np0005596060 nova_compute[247421]: 2026-01-26 18:28:23.522 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Jan 26 13:28:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Jan 26 13:28:23 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Jan 26 13:28:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 134 MiB data, 297 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 79 op/s
Jan 26 13:28:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:28:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:24.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:28:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:24.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:28:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Jan 26 13:28:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Jan 26 13:28:25 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Jan 26 13:28:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 148 MiB data, 303 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.0 MiB/s wr, 74 op/s
Jan 26 13:28:26 np0005596060 nova_compute[247421]: 2026-01-26 18:28:26.777 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:26.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:26.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 180 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.3 MiB/s wr, 112 op/s
Jan 26 13:28:28 np0005596060 nova_compute[247421]: 2026-01-26 18:28:28.524 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:28.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:28.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:28:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 180 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 52 op/s
Jan 26 13:28:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:30.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:30.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:31 np0005596060 nova_compute[247421]: 2026-01-26 18:28:31.779 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 180 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.5 MiB/s wr, 54 op/s
Jan 26 13:28:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Jan 26 13:28:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Jan 26 13:28:32 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Jan 26 13:28:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:32.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:32.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:33 np0005596060 nova_compute[247421]: 2026-01-26 18:28:33.526 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 180 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 50 op/s
Jan 26 13:28:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Jan 26 13:28:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Jan 26 13:28:34 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Jan 26 13:28:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:34.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:34.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:28:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:28:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1089858877' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:28:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:28:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1089858877' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:28:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 150 MiB data, 308 MiB used, 21 GiB / 21 GiB avail; 9.2 KiB/s rd, 1.7 KiB/s wr, 16 op/s
Jan 26 13:28:36 np0005596060 nova_compute[247421]: 2026-01-26 18:28:36.819 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:36.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:36.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 99 KiB/s rd, 6.6 KiB/s wr, 139 op/s
Jan 26 13:28:38 np0005596060 nova_compute[247421]: 2026-01-26 18:28:38.527 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:38 np0005596060 podman[281972]: 2026-01-26 18:28:38.800262453 +0000 UTC m=+0.058229792 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 13:28:38 np0005596060 podman[281973]: 2026-01-26 18:28:38.827844145 +0000 UTC m=+0.085790713 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:28:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:38.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:28:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:38.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:28:39 np0005596060 nova_compute[247421]: 2026-01-26 18:28:39.480 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:28:39.481 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:28:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:28:39.482 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:28:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:28:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Jan 26 13:28:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Jan 26 13:28:40 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Jan 26 13:28:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:28:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1043322277' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:28:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:28:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1043322277' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:28:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 41 MiB data, 266 MiB used, 21 GiB / 21 GiB avail; 96 KiB/s rd, 6.2 KiB/s wr, 135 op/s
Jan 26 13:28:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:28:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:40.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:28:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:40.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:41 np0005596060 nova_compute[247421]: 2026-01-26 18:28:41.821 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 95 KiB/s rd, 6.1 KiB/s wr, 133 op/s
Jan 26 13:28:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:42.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:42.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:43 np0005596060 nova_compute[247421]: 2026-01-26 18:28:43.560 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:28:44
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'backups', '.rgw.root', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta']
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 78 KiB/s rd, 5.0 KiB/s wr, 109 op/s
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:28:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:28:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:44.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:28:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:44.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:28:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:28:46 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:28:46.484 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:28:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 3.9 KiB/s wr, 97 op/s
Jan 26 13:28:46 np0005596060 nova_compute[247421]: 2026-01-26 18:28:46.824 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:28:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:46.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:28:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:47.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:28:48 np0005596060 nova_compute[247421]: 2026-01-26 18:28:48.571 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:48.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:28:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:49.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:28:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:28:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:28:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:50.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:51.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:51 np0005596060 nova_compute[247421]: 2026-01-26 18:28:51.827 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:28:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:52.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:28:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:53.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:28:53 np0005596060 nova_compute[247421]: 2026-01-26 18:28:53.573 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:28:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:28:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 8262 writes, 35K keys, 8258 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8262 writes, 8258 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1569 writes, 6421 keys, 1569 commit groups, 1.0 writes per commit group, ingest: 10.43 MB, 0.02 MB/s#012Interval WAL: 1569 writes, 1569 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.8      3.08              0.18        20    0.154       0      0       0.0       0.0#012  L6      1/0    8.65 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   3.5     56.4     45.8      3.49              0.54        19    0.184     99K    10K       0.0       0.0#012 Sum      1/0    8.65 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.5     29.9     31.2      6.57              0.72        39    0.169     99K    10K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9    119.4    118.2      0.29              0.12         6    0.048     19K   2051       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0     56.4     45.8      3.49              0.54        19    0.184     99K    10K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.8      3.08              0.18        19    0.162       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.044, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.20 GB write, 0.07 MB/s write, 0.19 GB read, 0.07 MB/s read, 6.6 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5652937211f0#2 capacity: 304.00 MB usage: 23.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000287 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(1356,22.63 MB,7.44374%) FilterBlock(40,283.36 KB,0.0910257%) IndexBlock(40,503.73 KB,0.161818%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 26 13:28:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:54.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:55.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:28:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:28:56 np0005596060 nova_compute[247421]: 2026-01-26 18:28:56.830 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:56.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:57.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:28:58 np0005596060 nova_compute[247421]: 2026-01-26 18:28:58.575 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:28:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:28:58.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:28:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:28:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:28:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:28:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:29:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:29:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:29:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:00.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:29:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:01.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:01 np0005596060 nova_compute[247421]: 2026-01-26 18:29:01.833 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 597 B/s wr, 1 op/s
Jan 26 13:29:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:02.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:03.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:03 np0005596060 nova_compute[247421]: 2026-01-26 18:29:03.577 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:29:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:29:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 597 B/s wr, 1 op/s
Jan 26 13:29:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:04.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:05.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:29:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 41 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 1.1 KiB/s rd, 938 B/s wr, 2 op/s
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.545052) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452146545119, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2191, "num_deletes": 255, "total_data_size": 3851849, "memory_usage": 3905072, "flush_reason": "Manual Compaction"}
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452146568119, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3781789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34037, "largest_seqno": 36227, "table_properties": {"data_size": 3771916, "index_size": 6303, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20436, "raw_average_key_size": 20, "raw_value_size": 3752070, "raw_average_value_size": 3789, "num_data_blocks": 275, "num_entries": 990, "num_filter_entries": 990, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769451936, "oldest_key_time": 1769451936, "file_creation_time": 1769452146, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 23138 microseconds, and 9433 cpu microseconds.
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.568166) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3781789 bytes OK
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.568213) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.570408) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.570421) EVENT_LOG_v1 {"time_micros": 1769452146570417, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.570438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3842965, prev total WAL file size 3842965, number of live WAL files 2.
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.571798) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3693KB)], [74(8862KB)]
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452146571920, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12856631, "oldest_snapshot_seqno": -1}
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6264 keys, 10868978 bytes, temperature: kUnknown
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452146642614, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10868978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10826081, "index_size": 26147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 159694, "raw_average_key_size": 25, "raw_value_size": 10712290, "raw_average_value_size": 1710, "num_data_blocks": 1054, "num_entries": 6264, "num_filter_entries": 6264, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769452146, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.642836) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10868978 bytes
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.644531) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.7 rd, 153.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.7 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 6789, records dropped: 525 output_compression: NoCompression
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.644546) EVENT_LOG_v1 {"time_micros": 1769452146644538, "job": 42, "event": "compaction_finished", "compaction_time_micros": 70754, "compaction_time_cpu_micros": 25665, "output_level": 6, "num_output_files": 1, "total_output_size": 10868978, "num_input_records": 6789, "num_output_records": 6264, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452146645313, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452146647075, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.571631) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.647101) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.647105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.647106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.647107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:29:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:06.647109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:29:06 np0005596060 nova_compute[247421]: 2026-01-26 18:29:06.835 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:29:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:06.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:29:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:07.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:07 np0005596060 nova_compute[247421]: 2026-01-26 18:29:07.568 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Acquiring lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:07 np0005596060 nova_compute[247421]: 2026-01-26 18:29:07.568 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:07 np0005596060 nova_compute[247421]: 2026-01-26 18:29:07.595 247428 DEBUG nova.compute.manager [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 26 13:29:07 np0005596060 nova_compute[247421]: 2026-01-26 18:29:07.685 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:07 np0005596060 nova_compute[247421]: 2026-01-26 18:29:07.686 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:07 np0005596060 nova_compute[247421]: 2026-01-26 18:29:07.693 247428 DEBUG nova.virt.hardware [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 26 13:29:07 np0005596060 nova_compute[247421]: 2026-01-26 18:29:07.693 247428 INFO nova.compute.claims [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 26 13:29:07 np0005596060 nova_compute[247421]: 2026-01-26 18:29:07.959 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:29:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:29:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1014871038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.410 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.415 247428 DEBUG nova.compute.provider_tree [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.432 247428 DEBUG nova.scheduler.client.report [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.459 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.460 247428 DEBUG nova.compute.manager [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.509 247428 DEBUG nova.compute.manager [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.510 247428 DEBUG nova.network.neutron [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.536 247428 INFO nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 26 13:29:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 58 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 441 KiB/s wr, 6 op/s
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.555 247428 DEBUG nova.compute.manager [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.629 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.674 247428 DEBUG nova.compute.manager [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.675 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.675 247428 INFO nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Creating image(s)#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.707 247428 DEBUG nova.storage.rbd_utils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] rbd image ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.737 247428 DEBUG nova.storage.rbd_utils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] rbd image ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.766 247428 DEBUG nova.storage.rbd_utils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] rbd image ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.770 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.838 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.840 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Acquiring lock "0e27310cde9db7031eb6052434134c1283ddf216" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.840 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.841 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.866 247428 DEBUG nova.storage.rbd_utils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] rbd image ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:29:08 np0005596060 nova_compute[247421]: 2026-01-26 18:29:08.869 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:29:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:08.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:29:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:09.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:29:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:29:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3172948562' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:29:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:29:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3172948562' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:29:09 np0005596060 podman[282195]: 2026-01-26 18:29:09.78998368 +0000 UTC m=+0.054271822 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 13:29:09 np0005596060 podman[282196]: 2026-01-26 18:29:09.840012083 +0000 UTC m=+0.096068215 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 26 13:29:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:29:10 np0005596060 nova_compute[247421]: 2026-01-26 18:29:10.422 247428 DEBUG nova.network.neutron [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Successfully created port: 576a36c0-4aed-492a-b678-83c1eaef931b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 26 13:29:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 58 MiB data, 259 MiB used, 21 GiB / 21 GiB avail; 3.3 KiB/s rd, 441 KiB/s wr, 6 op/s
Jan 26 13:29:10 np0005596060 nova_compute[247421]: 2026-01-26 18:29:10.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:29:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:10.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:11.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:11 np0005596060 nova_compute[247421]: 2026-01-26 18:29:11.283 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:29:11 np0005596060 nova_compute[247421]: 2026-01-26 18:29:11.360 247428 DEBUG nova.storage.rbd_utils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] resizing rbd image ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 13:29:11 np0005596060 nova_compute[247421]: 2026-01-26 18:29:11.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:29:11 np0005596060 nova_compute[247421]: 2026-01-26 18:29:11.836 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:29:12 np0005596060 nova_compute[247421]: 2026-01-26 18:29:12.253 247428 DEBUG nova.objects.instance [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lazy-loading 'migration_context' on Instance uuid ede36747-ccc3-4077-b6f0-a5a6663f4cd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 13:29:12 np0005596060 nova_compute[247421]: 2026-01-26 18:29:12.278 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 13:29:12 np0005596060 nova_compute[247421]: 2026-01-26 18:29:12.279 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Ensure instance console log exists: /var/lib/nova/instances/ede36747-ccc3-4077-b6f0-a5a6663f4cd7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 13:29:12 np0005596060 nova_compute[247421]: 2026-01-26 18:29:12.279 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:29:12 np0005596060 nova_compute[247421]: 2026-01-26 18:29:12.280 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:29:12 np0005596060 nova_compute[247421]: 2026-01-26 18:29:12.280 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:29:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 50 KiB/s rd, 3.5 MiB/s wr, 76 op/s
Jan 26 13:29:12 np0005596060 nova_compute[247421]: 2026-01-26 18:29:12.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:29:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:12.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:13.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:29:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1154636018' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:29:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:29:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1154636018' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:29:13 np0005596060 nova_compute[247421]: 2026-01-26 18:29:13.366 247428 DEBUG nova.network.neutron [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Successfully updated port: 576a36c0-4aed-492a-b678-83c1eaef931b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 13:29:13 np0005596060 nova_compute[247421]: 2026-01-26 18:29:13.388 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Acquiring lock "refresh_cache-ede36747-ccc3-4077-b6f0-a5a6663f4cd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 13:29:13 np0005596060 nova_compute[247421]: 2026-01-26 18:29:13.388 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Acquired lock "refresh_cache-ede36747-ccc3-4077-b6f0-a5a6663f4cd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 13:29:13 np0005596060 nova_compute[247421]: 2026-01-26 18:29:13.388 247428 DEBUG nova.network.neutron [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 13:29:13 np0005596060 nova_compute[247421]: 2026-01-26 18:29:13.544 247428 DEBUG nova.compute.manager [req-21cd7ca7-d86d-4c58-808c-f6364dd1a156 req-04049a08-4c71-48ca-8a98-eac5a73db25c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Received event network-changed-576a36c0-4aed-492a-b678-83c1eaef931b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 13:29:13 np0005596060 nova_compute[247421]: 2026-01-26 18:29:13.545 247428 DEBUG nova.compute.manager [req-21cd7ca7-d86d-4c58-808c-f6364dd1a156 req-04049a08-4c71-48ca-8a98-eac5a73db25c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Refreshing instance network info cache due to event network-changed-576a36c0-4aed-492a-b678-83c1eaef931b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 13:29:13 np0005596060 nova_compute[247421]: 2026-01-26 18:29:13.545 247428 DEBUG oslo_concurrency.lockutils [req-21cd7ca7-d86d-4c58-808c-f6364dd1a156 req-04049a08-4c71-48ca-8a98-eac5a73db25c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-ede36747-ccc3-4077-b6f0-a5a6663f4cd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 13:29:13 np0005596060 nova_compute[247421]: 2026-01-26 18:29:13.630 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:29:13 np0005596060 nova_compute[247421]: 2026-01-26 18:29:13.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:29:13 np0005596060 nova_compute[247421]: 2026-01-26 18:29:13.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 13:29:13 np0005596060 nova_compute[247421]: 2026-01-26 18:29:13.690 247428 DEBUG nova.network.neutron [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 13:29:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:29:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:29:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:29:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:29:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:29:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:29:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 49 KiB/s rd, 3.5 MiB/s wr, 74 op/s
Jan 26 13:29:14 np0005596060 podman[282485]: 2026-01-26 18:29:14.606500049 +0000 UTC m=+0.068237587 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.667 247428 DEBUG nova.network.neutron [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Updating instance_info_cache with network_info: [{"id": "576a36c0-4aed-492a-b678-83c1eaef931b", "address": "fa:16:3e:53:c8:1e", "network": {"id": "3f70dd9e-997c-43d9-abf7-8ac842dc7a2a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1075445344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dd033a95e4c454f82b471fb31b8c978", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap576a36c0-4a", "ovs_interfaceid": "576a36c0-4aed-492a-b678-83c1eaef931b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.696 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Releasing lock "refresh_cache-ede36747-ccc3-4077-b6f0-a5a6663f4cd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.697 247428 DEBUG nova.compute.manager [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Instance network_info: |[{"id": "576a36c0-4aed-492a-b678-83c1eaef931b", "address": "fa:16:3e:53:c8:1e", "network": {"id": "3f70dd9e-997c-43d9-abf7-8ac842dc7a2a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1075445344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dd033a95e4c454f82b471fb31b8c978", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap576a36c0-4a", "ovs_interfaceid": "576a36c0-4aed-492a-b678-83c1eaef931b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.697 247428 DEBUG oslo_concurrency.lockutils [req-21cd7ca7-d86d-4c58-808c-f6364dd1a156 req-04049a08-4c71-48ca-8a98-eac5a73db25c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-ede36747-ccc3-4077-b6f0-a5a6663f4cd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.697 247428 DEBUG nova.network.neutron [req-21cd7ca7-d86d-4c58-808c-f6364dd1a156 req-04049a08-4c71-48ca-8a98-eac5a73db25c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Refreshing network info cache for port 576a36c0-4aed-492a-b678-83c1eaef931b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.699 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Start _get_guest_xml network_info=[{"id": "576a36c0-4aed-492a-b678-83c1eaef931b", "address": "fa:16:3e:53:c8:1e", "network": {"id": "3f70dd9e-997c-43d9-abf7-8ac842dc7a2a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1075445344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dd033a95e4c454f82b471fb31b8c978", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap576a36c0-4a", "ovs_interfaceid": "576a36c0-4aed-492a-b678-83c1eaef931b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.704 247428 WARNING nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.709 247428 DEBUG nova.virt.libvirt.host [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.710 247428 DEBUG nova.virt.libvirt.host [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.713 247428 DEBUG nova.virt.libvirt.host [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.713 247428 DEBUG nova.virt.libvirt.host [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.714 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.715 247428 DEBUG nova.virt.hardware [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.715 247428 DEBUG nova.virt.hardware [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.715 247428 DEBUG nova.virt.hardware [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.715 247428 DEBUG nova.virt.hardware [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.716 247428 DEBUG nova.virt.hardware [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.716 247428 DEBUG nova.virt.hardware [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.716 247428 DEBUG nova.virt.hardware [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.716 247428 DEBUG nova.virt.hardware [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.717 247428 DEBUG nova.virt.hardware [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.717 247428 DEBUG nova.virt.hardware [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.717 247428 DEBUG nova.virt.hardware [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 13:29:14 np0005596060 nova_compute[247421]: 2026-01-26 18:29:14.719 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:29:14 np0005596060 podman[282485]: 2026-01-26 18:29:14.720781426 +0000 UTC m=+0.182518954 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:29:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:14.757 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:29:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:14.757 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:29:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:14.757 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:29:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:14.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:15.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:29:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/491516259' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.170 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.195 247428 DEBUG nova.storage.rbd_utils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] rbd image ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.200 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:29:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:29:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:29:15 np0005596060 podman[282681]: 2026-01-26 18:29:15.33335287 +0000 UTC m=+0.057672148 container exec e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 13:29:15 np0005596060 podman[282681]: 2026-01-26 18:29:15.37069549 +0000 UTC m=+0.095014768 container exec_died e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 13:29:15 np0005596060 podman[282765]: 2026-01-26 18:29:15.575995993 +0000 UTC m=+0.051336017 container exec 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, build-date=2023-02-22T09:23:20, vcs-type=git, name=keepalived, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Jan 26 13:29:15 np0005596060 podman[282765]: 2026-01-26 18:29:15.585505245 +0000 UTC m=+0.060845269 container exec_died 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20)
Jan 26 13:29:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:29:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/371539234' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:29:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.650 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.652 247428 DEBUG nova.virt.libvirt.vif [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:29:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1610684880',display_name='tempest-TestServerMultinode-server-1610684880',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-1610684880',id=18,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f2b1e48060904db7a7d629fffdaa921a',ramdisk_id='',reservation_id='r-asirxge5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-128980879',owner_user_name='tempest-TestServerMultinode-12
8980879-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:29:08Z,user_data=None,user_id='87b6f2cd2d124de2be281e270184d195',uuid=ede36747-ccc3-4077-b6f0-a5a6663f4cd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "576a36c0-4aed-492a-b678-83c1eaef931b", "address": "fa:16:3e:53:c8:1e", "network": {"id": "3f70dd9e-997c-43d9-abf7-8ac842dc7a2a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1075445344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dd033a95e4c454f82b471fb31b8c978", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap576a36c0-4a", "ovs_interfaceid": "576a36c0-4aed-492a-b678-83c1eaef931b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.652 247428 DEBUG nova.network.os_vif_util [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Converting VIF {"id": "576a36c0-4aed-492a-b678-83c1eaef931b", "address": "fa:16:3e:53:c8:1e", "network": {"id": "3f70dd9e-997c-43d9-abf7-8ac842dc7a2a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1075445344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dd033a95e4c454f82b471fb31b8c978", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap576a36c0-4a", "ovs_interfaceid": "576a36c0-4aed-492a-b678-83c1eaef931b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.653 247428 DEBUG nova.network.os_vif_util [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c8:1e,bridge_name='br-int',has_traffic_filtering=True,id=576a36c0-4aed-492a-b678-83c1eaef931b,network=Network(3f70dd9e-997c-43d9-abf7-8ac842dc7a2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap576a36c0-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.654 247428 DEBUG nova.objects.instance [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lazy-loading 'pci_devices' on Instance uuid ede36747-ccc3-4077-b6f0-a5a6663f4cd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.675 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  <uuid>ede36747-ccc3-4077-b6f0-a5a6663f4cd7</uuid>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  <name>instance-00000012</name>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <nova:name>tempest-TestServerMultinode-server-1610684880</nova:name>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:29:14</nova:creationTime>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <nova:user uuid="87b6f2cd2d124de2be281e270184d195">tempest-TestServerMultinode-128980879-project-admin</nova:user>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <nova:project uuid="f2b1e48060904db7a7d629fffdaa921a">tempest-TestServerMultinode-128980879</nova:project>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <nova:port uuid="576a36c0-4aed-492a-b678-83c1eaef931b">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <entry name="serial">ede36747-ccc3-4077-b6f0-a5a6663f4cd7</entry>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <entry name="uuid">ede36747-ccc3-4077-b6f0-a5a6663f4cd7</entry>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk.config">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:53:c8:1e"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <target dev="tap576a36c0-4a"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/ede36747-ccc3-4077-b6f0-a5a6663f4cd7/console.log" append="off"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:29:15 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:29:15 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:29:15 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:29:15 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.676 247428 DEBUG nova.compute.manager [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Preparing to wait for external event network-vif-plugged-576a36c0-4aed-492a-b678-83c1eaef931b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.676 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Acquiring lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.676 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.676 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.677 247428 DEBUG nova.virt.libvirt.vif [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:29:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1610684880',display_name='tempest-TestServerMultinode-server-1610684880',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-1610684880',id=18,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f2b1e48060904db7a7d629fffdaa921a',ramdisk_id='',reservation_id='r-asirxge5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-128980879',owner_user_name='tempest-TestServerMu
ltinode-128980879-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:29:08Z,user_data=None,user_id='87b6f2cd2d124de2be281e270184d195',uuid=ede36747-ccc3-4077-b6f0-a5a6663f4cd7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "576a36c0-4aed-492a-b678-83c1eaef931b", "address": "fa:16:3e:53:c8:1e", "network": {"id": "3f70dd9e-997c-43d9-abf7-8ac842dc7a2a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1075445344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dd033a95e4c454f82b471fb31b8c978", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap576a36c0-4a", "ovs_interfaceid": "576a36c0-4aed-492a-b678-83c1eaef931b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.677 247428 DEBUG nova.network.os_vif_util [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Converting VIF {"id": "576a36c0-4aed-492a-b678-83c1eaef931b", "address": "fa:16:3e:53:c8:1e", "network": {"id": "3f70dd9e-997c-43d9-abf7-8ac842dc7a2a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1075445344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dd033a95e4c454f82b471fb31b8c978", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap576a36c0-4a", "ovs_interfaceid": "576a36c0-4aed-492a-b678-83c1eaef931b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.678 247428 DEBUG nova.network.os_vif_util [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c8:1e,bridge_name='br-int',has_traffic_filtering=True,id=576a36c0-4aed-492a-b678-83c1eaef931b,network=Network(3f70dd9e-997c-43d9-abf7-8ac842dc7a2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap576a36c0-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.678 247428 DEBUG os_vif [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c8:1e,bridge_name='br-int',has_traffic_filtering=True,id=576a36c0-4aed-492a-b678-83c1eaef931b,network=Network(3f70dd9e-997c-43d9-abf7-8ac842dc7a2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap576a36c0-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.679 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.679 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.680 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.684 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.684 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap576a36c0-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.684 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap576a36c0-4a, col_values=(('external_ids', {'iface-id': '576a36c0-4aed-492a-b678-83c1eaef931b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:c8:1e', 'vm-uuid': 'ede36747-ccc3-4077-b6f0-a5a6663f4cd7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.742 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:15 np0005596060 NetworkManager[48900]: <info>  [1769452155.7437] manager: (tap576a36c0-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.747 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.750 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.751 247428 INFO os_vif [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c8:1e,bridge_name='br-int',has_traffic_filtering=True,id=576a36c0-4aed-492a-b678-83c1eaef931b,network=Network(3f70dd9e-997c-43d9-abf7-8ac842dc7a2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap576a36c0-4a')#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.808 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.808 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.809 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] No VIF found with MAC fa:16:3e:53:c8:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.809 247428 INFO nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Using config drive#033[00m
Jan 26 13:29:15 np0005596060 nova_compute[247421]: 2026-01-26 18:29:15.832 247428 DEBUG nova.storage.rbd_utils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] rbd image ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.222 247428 DEBUG nova.network.neutron [req-21cd7ca7-d86d-4c58-808c-f6364dd1a156 req-04049a08-4c71-48ca-8a98-eac5a73db25c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Updated VIF entry in instance network info cache for port 576a36c0-4aed-492a-b678-83c1eaef931b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.223 247428 DEBUG nova.network.neutron [req-21cd7ca7-d86d-4c58-808c-f6364dd1a156 req-04049a08-4c71-48ca-8a98-eac5a73db25c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Updating instance_info_cache with network_info: [{"id": "576a36c0-4aed-492a-b678-83c1eaef931b", "address": "fa:16:3e:53:c8:1e", "network": {"id": "3f70dd9e-997c-43d9-abf7-8ac842dc7a2a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1075445344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dd033a95e4c454f82b471fb31b8c978", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap576a36c0-4a", "ovs_interfaceid": "576a36c0-4aed-492a-b678-83c1eaef931b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.249 247428 DEBUG oslo_concurrency.lockutils [req-21cd7ca7-d86d-4c58-808c-f6364dd1a156 req-04049a08-4c71-48ca-8a98-eac5a73db25c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-ede36747-ccc3-4077-b6f0-a5a6663f4cd7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.468 247428 INFO nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Creating config drive at /var/lib/nova/instances/ede36747-ccc3-4077-b6f0-a5a6663f4cd7/disk.config#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.474 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ede36747-ccc3-4077-b6f0-a5a6663f4cd7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmputtblunc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:29:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 3.5 MiB/s wr, 78 op/s
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.604 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ede36747-ccc3-4077-b6f0-a5a6663f4cd7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmputtblunc" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.633 247428 DEBUG nova.storage.rbd_utils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] rbd image ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.637 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ede36747-ccc3-4077-b6f0-a5a6663f4cd7/disk.config ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.661 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.661 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.662 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.662 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.680 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.680 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:29:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:29:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.838 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.955 247428 DEBUG oslo_concurrency.processutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ede36747-ccc3-4077-b6f0-a5a6663f4cd7/disk.config ede36747-ccc3-4077-b6f0-a5a6663f4cd7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:29:16 np0005596060 nova_compute[247421]: 2026-01-26 18:29:16.957 247428 INFO nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Deleting local config drive /var/lib/nova/instances/ede36747-ccc3-4077-b6f0-a5a6663f4cd7/disk.config because it was imported into RBD.#033[00m
Jan 26 13:29:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:16.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:17 np0005596060 kernel: tap576a36c0-4a: entered promiscuous mode
Jan 26 13:29:17 np0005596060 NetworkManager[48900]: <info>  [1769452157.0336] manager: (tap576a36c0-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Jan 26 13:29:17 np0005596060 ovn_controller[148842]: 2026-01-26T18:29:17Z|00100|binding|INFO|Claiming lport 576a36c0-4aed-492a-b678-83c1eaef931b for this chassis.
Jan 26 13:29:17 np0005596060 ovn_controller[148842]: 2026-01-26T18:29:17Z|00101|binding|INFO|576a36c0-4aed-492a-b678-83c1eaef931b: Claiming fa:16:3e:53:c8:1e 10.100.0.9
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.033 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.038 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:29:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:17.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:29:17 np0005596060 systemd-udevd[282920]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:29:17 np0005596060 systemd-machined[213879]: New machine qemu-8-instance-00000012.
Jan 26 13:29:17 np0005596060 NetworkManager[48900]: <info>  [1769452157.0911] device (tap576a36c0-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:29:17 np0005596060 NetworkManager[48900]: <info>  [1769452157.0924] device (tap576a36c0-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:29:17 np0005596060 systemd[1]: Started Virtual Machine qemu-8-instance-00000012.
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.111 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:17 np0005596060 ovn_controller[148842]: 2026-01-26T18:29:17Z|00102|binding|INFO|Setting lport 576a36c0-4aed-492a-b678-83c1eaef931b ovn-installed in OVS
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.118 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:17 np0005596060 ovn_controller[148842]: 2026-01-26T18:29:17Z|00103|binding|INFO|Setting lport 576a36c0-4aed-492a-b678-83c1eaef931b up in Southbound
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.241 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:c8:1e 10.100.0.9'], port_security=['fa:16:3e:53:c8:1e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ede36747-ccc3-4077-b6f0-a5a6663f4cd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f2b1e48060904db7a7d629fffdaa921a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0c93d08d-c0a8-4947-b001-f618e8c0b8aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eb7435b-663a-4566-9286-29c15a28c76b, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=576a36c0-4aed-492a-b678-83c1eaef931b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.243 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 576a36c0-4aed-492a-b678-83c1eaef931b in datapath 3f70dd9e-997c-43d9-abf7-8ac842dc7a2a bound to our chassis#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.245 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f70dd9e-997c-43d9-abf7-8ac842dc7a2a#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.263 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7f7296a4-543c-40b0-9af6-7e6571a8bc46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.265 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f70dd9e-91 in ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.267 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f70dd9e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.267 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[61b93a34-1762-4510-8c62-0f4e517728fb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.268 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[199145a1-2481-451d-b2ca-bf391dab38a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.280 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[1d232469-06b2-4523-b198-b0e476ab5b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.301 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[aabf8913-ec81-4355-a989-3691ccf3456b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.339 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[af6c9254-de5a-4c8e-a42a-7ca636bb3975]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.344 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[6f882228-8f5b-4dfe-a22b-af0417d932c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 NetworkManager[48900]: <info>  [1769452157.3456] manager: (tap3f70dd9e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Jan 26 13:29:17 np0005596060 systemd-udevd[282928]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:29:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:29:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3633095032' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:29:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:29:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3633095032' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.383 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd5d19c-2b6a-419e-8399-8fdf89b7e85a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.387 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[2cf2cdbf-77ed-45d0-a89f-65b0ea928dad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 NetworkManager[48900]: <info>  [1769452157.4134] device (tap3f70dd9e-90): carrier: link connected
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.421 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[5afee7d0-e090-4024-bfe5-69b19b6683d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.445 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb52902-08bc-473a-bae9-ad3e5dd9e831]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f70dd9e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:13:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592199, 'reachable_time': 29236, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283069, 'error': None, 'target': 'ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.467 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1b5298-1a0e-47d0-98dc-46ee81bfb543]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6d:1396'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592199, 'tstamp': 592199}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283070, 'error': None, 'target': 'ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.485 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7cec3f11-6145-40d3-a521-cff3c3ae65aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f70dd9e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6d:13:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592199, 'reachable_time': 29236, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283071, 'error': None, 'target': 'ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.530 247428 DEBUG nova.compute.manager [req-19d9ab27-532a-40e5-a7ea-0c9d350646f2 req-689ed142-606b-479e-be59-7acaae2bd9af 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Received event network-vif-plugged-576a36c0-4aed-492a-b678-83c1eaef931b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.530 247428 DEBUG oslo_concurrency.lockutils [req-19d9ab27-532a-40e5-a7ea-0c9d350646f2 req-689ed142-606b-479e-be59-7acaae2bd9af 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.530 247428 DEBUG oslo_concurrency.lockutils [req-19d9ab27-532a-40e5-a7ea-0c9d350646f2 req-689ed142-606b-479e-be59-7acaae2bd9af 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.531 247428 DEBUG oslo_concurrency.lockutils [req-19d9ab27-532a-40e5-a7ea-0c9d350646f2 req-689ed142-606b-479e-be59-7acaae2bd9af 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.531 247428 DEBUG nova.compute.manager [req-19d9ab27-532a-40e5-a7ea-0c9d350646f2 req-689ed142-606b-479e-be59-7acaae2bd9af 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Processing event network-vif-plugged-576a36c0-4aed-492a-b678-83c1eaef931b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.532 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5b52cd-4b9a-4c2e-8296-5745b9e3fff7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.613 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a42e5687-87cd-43b2-9b6f-3d088b43edc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.616 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f70dd9e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.617 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.617 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f70dd9e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:29:17 np0005596060 NetworkManager[48900]: <info>  [1769452157.6209] manager: (tap3f70dd9e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Jan 26 13:29:17 np0005596060 kernel: tap3f70dd9e-90: entered promiscuous mode
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.620 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.622 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.624 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f70dd9e-90, col_values=(('external_ids', {'iface-id': 'c02a9bd5-7753-480e-86c4-d809dead851d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.625 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:17 np0005596060 ovn_controller[148842]: 2026-01-26T18:29:17Z|00104|binding|INFO|Releasing lport c02a9bd5-7753-480e-86c4-d809dead851d from this chassis (sb_readonly=0)
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.626 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.627 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f70dd9e-997c-43d9-abf7-8ac842dc7a2a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f70dd9e-997c-43d9-abf7-8ac842dc7a2a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.628 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a18aa4a9-d91d-45b4-92ae-384d253cb8f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.630 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/3f70dd9e-997c-43d9-abf7-8ac842dc7a2a.pid.haproxy
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 3f70dd9e-997c-43d9-abf7-8ac842dc7a2a
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:29:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:17.631 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a', 'env', 'PROCESS_TAG=haproxy-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f70dd9e-997c-43d9-abf7-8ac842dc7a2a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.643 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.674 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.675 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.675 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.675 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.675 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:29:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:29:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:29:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:29:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:29:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.851 247428 DEBUG nova.compute.manager [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.853 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452157.8508582, ede36747-ccc3-4077-b6f0-a5a6663f4cd7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.854 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] VM Started (Lifecycle Event)#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.861 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.865 247428 INFO nova.virt.libvirt.driver [-] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Instance spawned successfully.#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.866 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.897 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.903 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.905 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.906 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.906 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.907 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.908 247428 DEBUG nova.virt.libvirt.driver [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.914 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.954 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.955 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452157.852818, ede36747-ccc3-4077-b6f0-a5a6663f4cd7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.955 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] VM Paused (Lifecycle Event)#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.985 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.991 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452157.861106, ede36747-ccc3-4077-b6f0-a5a6663f4cd7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:29:17 np0005596060 nova_compute[247421]: 2026-01-26 18:29:17.991 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.002 247428 INFO nova.compute.manager [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Took 9.33 seconds to spawn the instance on the hypervisor.#033[00m
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.003 247428 DEBUG nova.compute.manager [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:29:18 np0005596060 podman[283182]: 2026-01-26 18:29:18.013707486 +0000 UTC m=+0.047645093 container create cd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.019 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.022 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 6d1c820d-b489-4efb-af1f-82e9d1cce51f does not exist
Jan 26 13:29:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 16b47323-ce25-4e01-91cf-b6b22a8b8b90 does not exist
Jan 26 13:29:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 89ee0eb4-9ec7-4624-a12e-d5f458c9b92c does not exist
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.045 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:29:18 np0005596060 systemd[1]: Started libpod-conmon-cd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad.scope.
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.084 247428 INFO nova.compute.manager [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Took 10.44 seconds to build instance.#033[00m
Jan 26 13:29:18 np0005596060 podman[283182]: 2026-01-26 18:29:17.987991842 +0000 UTC m=+0.021929479 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:29:18 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:29:18 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7aff024a49d5bbe341c4c1e23deae691dd9f60c9819bd528e1d8c90bfc1580e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.102 247428 DEBUG oslo_concurrency.lockutils [None req-a87809fb-5fd6-4478-a0ee-fd5a6a47ecda 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:18 np0005596060 podman[283182]: 2026-01-26 18:29:18.113982177 +0000 UTC m=+0.147919824 container init cd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/524360569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:29:18 np0005596060 podman[283182]: 2026-01-26 18:29:18.130832826 +0000 UTC m=+0.164770433 container start cd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.146 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:29:18 np0005596060 neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a[283205]: [NOTICE]   (283234) : New worker (283253) forked
Jan 26 13:29:18 np0005596060 neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a[283205]: [NOTICE]   (283234) : Loading success.
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.223 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.223 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.395 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.396 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4700MB free_disk=20.946773529052734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.396 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.396 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.474 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance ede36747-ccc3-4077-b6f0-a5a6663f4cd7 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.474 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.475 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:29:18 np0005596060 nova_compute[247421]: 2026-01-26 18:29:18.509 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:29:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 290 KiB/s rd, 3.6 MiB/s wr, 117 op/s
Jan 26 13:29:18 np0005596060 podman[283356]: 2026-01-26 18:29:18.645089098 +0000 UTC m=+0.039936097 container create 1cc2ea71d6942213287669f35f2a9efc76b8139540e6fed9710331fcb48b7e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:29:18 np0005596060 systemd[1]: Started libpod-conmon-1cc2ea71d6942213287669f35f2a9efc76b8139540e6fed9710331fcb48b7e91.scope.
Jan 26 13:29:18 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:29:18 np0005596060 podman[283356]: 2026-01-26 18:29:18.717261084 +0000 UTC m=+0.112108083 container init 1cc2ea71d6942213287669f35f2a9efc76b8139540e6fed9710331fcb48b7e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_archimedes, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:29:18 np0005596060 podman[283356]: 2026-01-26 18:29:18.627335196 +0000 UTC m=+0.022182215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:29:18 np0005596060 podman[283356]: 2026-01-26 18:29:18.72339219 +0000 UTC m=+0.118239189 container start 1cc2ea71d6942213287669f35f2a9efc76b8139540e6fed9710331fcb48b7e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:29:18 np0005596060 podman[283356]: 2026-01-26 18:29:18.726705554 +0000 UTC m=+0.121552553 container attach 1cc2ea71d6942213287669f35f2a9efc76b8139540e6fed9710331fcb48b7e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_archimedes, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 13:29:18 np0005596060 admiring_archimedes[283391]: 167 167
Jan 26 13:29:18 np0005596060 systemd[1]: libpod-1cc2ea71d6942213287669f35f2a9efc76b8139540e6fed9710331fcb48b7e91.scope: Deactivated successfully.
Jan 26 13:29:18 np0005596060 conmon[283391]: conmon 1cc2ea71d69422132876 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1cc2ea71d6942213287669f35f2a9efc76b8139540e6fed9710331fcb48b7e91.scope/container/memory.events
Jan 26 13:29:18 np0005596060 podman[283356]: 2026-01-26 18:29:18.730724727 +0000 UTC m=+0.125571726 container died 1cc2ea71d6942213287669f35f2a9efc76b8139540e6fed9710331fcb48b7e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_archimedes, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 13:29:18 np0005596060 systemd[1]: var-lib-containers-storage-overlay-247d6b936a80d5f158281787927c624605b0d7cacdb3e9542a542b76885c3e81-merged.mount: Deactivated successfully.
Jan 26 13:29:18 np0005596060 podman[283356]: 2026-01-26 18:29:18.769588455 +0000 UTC m=+0.164435454 container remove 1cc2ea71d6942213287669f35f2a9efc76b8139540e6fed9710331fcb48b7e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_archimedes, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:29:18 np0005596060 systemd[1]: libpod-conmon-1cc2ea71d6942213287669f35f2a9efc76b8139540e6fed9710331fcb48b7e91.scope: Deactivated successfully.
Jan 26 13:29:18 np0005596060 podman[283416]: 2026-01-26 18:29:18.93448591 +0000 UTC m=+0.042995525 container create 4388640830f1a6d9bf2fdb12d6bafe634aa28d0088c8e572f61bf53e93c00878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:29:18 np0005596060 systemd[1]: Started libpod-conmon-4388640830f1a6d9bf2fdb12d6bafe634aa28d0088c8e572f61bf53e93c00878.scope.
Jan 26 13:29:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:18.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:19 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:29:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028f07ab08cf3f48bd346f39b1c44d53419e6473d80f8c461ef9d299ba182da2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028f07ab08cf3f48bd346f39b1c44d53419e6473d80f8c461ef9d299ba182da2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028f07ab08cf3f48bd346f39b1c44d53419e6473d80f8c461ef9d299ba182da2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028f07ab08cf3f48bd346f39b1c44d53419e6473d80f8c461ef9d299ba182da2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028f07ab08cf3f48bd346f39b1c44d53419e6473d80f8c461ef9d299ba182da2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:19 np0005596060 podman[283416]: 2026-01-26 18:29:18.917407926 +0000 UTC m=+0.025917541 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:29:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:29:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3809252509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:29:19 np0005596060 podman[283416]: 2026-01-26 18:29:19.040065166 +0000 UTC m=+0.148574821 container init 4388640830f1a6d9bf2fdb12d6bafe634aa28d0088c8e572f61bf53e93c00878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 13:29:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:19.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:19 np0005596060 podman[283416]: 2026-01-26 18:29:19.048399518 +0000 UTC m=+0.156909133 container start 4388640830f1a6d9bf2fdb12d6bafe634aa28d0088c8e572f61bf53e93c00878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 26 13:29:19 np0005596060 nova_compute[247421]: 2026-01-26 18:29:19.048 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:29:19 np0005596060 podman[283416]: 2026-01-26 18:29:19.05318991 +0000 UTC m=+0.161699545 container attach 4388640830f1a6d9bf2fdb12d6bafe634aa28d0088c8e572f61bf53e93c00878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:29:19 np0005596060 nova_compute[247421]: 2026-01-26 18:29:19.064 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:29:19 np0005596060 nova_compute[247421]: 2026-01-26 18:29:19.117 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:29:19 np0005596060 nova_compute[247421]: 2026-01-26 18:29:19.150 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:29:19 np0005596060 nova_compute[247421]: 2026-01-26 18:29:19.151 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:29:19 np0005596060 nova_compute[247421]: 2026-01-26 18:29:19.687 247428 DEBUG nova.compute.manager [req-44bb7e41-aee5-439b-9dad-b9d316e864f5 req-9b43baf2-df4a-48c3-a2dc-f6af3eda9206 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Received event network-vif-plugged-576a36c0-4aed-492a-b678-83c1eaef931b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:29:19 np0005596060 nova_compute[247421]: 2026-01-26 18:29:19.687 247428 DEBUG oslo_concurrency.lockutils [req-44bb7e41-aee5-439b-9dad-b9d316e864f5 req-9b43baf2-df4a-48c3-a2dc-f6af3eda9206 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:19 np0005596060 nova_compute[247421]: 2026-01-26 18:29:19.688 247428 DEBUG oslo_concurrency.lockutils [req-44bb7e41-aee5-439b-9dad-b9d316e864f5 req-9b43baf2-df4a-48c3-a2dc-f6af3eda9206 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:19 np0005596060 nova_compute[247421]: 2026-01-26 18:29:19.688 247428 DEBUG oslo_concurrency.lockutils [req-44bb7e41-aee5-439b-9dad-b9d316e864f5 req-9b43baf2-df4a-48c3-a2dc-f6af3eda9206 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:19 np0005596060 nova_compute[247421]: 2026-01-26 18:29:19.688 247428 DEBUG nova.compute.manager [req-44bb7e41-aee5-439b-9dad-b9d316e864f5 req-9b43baf2-df4a-48c3-a2dc-f6af3eda9206 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] No waiting events found dispatching network-vif-plugged-576a36c0-4aed-492a-b678-83c1eaef931b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:29:19 np0005596060 nova_compute[247421]: 2026-01-26 18:29:19.688 247428 WARNING nova.compute.manager [req-44bb7e41-aee5-439b-9dad-b9d316e864f5 req-9b43baf2-df4a-48c3-a2dc-f6af3eda9206 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Received unexpected event network-vif-plugged-576a36c0-4aed-492a-b678-83c1eaef931b for instance with vm_state active and task_state None.#033[00m
Jan 26 13:29:19 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:19.922 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:29:19 np0005596060 nova_compute[247421]: 2026-01-26 18:29:19.922 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:19 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:19.925 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:29:19 np0005596060 recursing_perlman[283432]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:29:19 np0005596060 recursing_perlman[283432]: --> relative data size: 1.0
Jan 26 13:29:19 np0005596060 recursing_perlman[283432]: --> All data devices are unavailable
Jan 26 13:29:19 np0005596060 systemd[1]: libpod-4388640830f1a6d9bf2fdb12d6bafe634aa28d0088c8e572f61bf53e93c00878.scope: Deactivated successfully.
Jan 26 13:29:19 np0005596060 podman[283416]: 2026-01-26 18:29:19.973560493 +0000 UTC m=+1.082070108 container died 4388640830f1a6d9bf2fdb12d6bafe634aa28d0088c8e572f61bf53e93c00878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:29:20 np0005596060 systemd[1]: var-lib-containers-storage-overlay-028f07ab08cf3f48bd346f39b1c44d53419e6473d80f8c461ef9d299ba182da2-merged.mount: Deactivated successfully.
Jan 26 13:29:20 np0005596060 podman[283416]: 2026-01-26 18:29:20.032850411 +0000 UTC m=+1.141360026 container remove 4388640830f1a6d9bf2fdb12d6bafe634aa28d0088c8e572f61bf53e93c00878 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:29:20 np0005596060 systemd[1]: libpod-conmon-4388640830f1a6d9bf2fdb12d6bafe634aa28d0088c8e572f61bf53e93c00878.scope: Deactivated successfully.
Jan 26 13:29:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:29:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 288 KiB/s rd, 3.1 MiB/s wr, 112 op/s
Jan 26 13:29:20 np0005596060 podman[283604]: 2026-01-26 18:29:20.660816836 +0000 UTC m=+0.041894236 container create 19de8d6b0dbfb6cb8f841090ceb785faf84d1612e5df277fd170917fa0a3f839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 13:29:20 np0005596060 systemd[1]: Started libpod-conmon-19de8d6b0dbfb6cb8f841090ceb785faf84d1612e5df277fd170917fa0a3f839.scope.
Jan 26 13:29:20 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:29:20 np0005596060 podman[283604]: 2026-01-26 18:29:20.734430629 +0000 UTC m=+0.115508029 container init 19de8d6b0dbfb6cb8f841090ceb785faf84d1612e5df277fd170917fa0a3f839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:29:20 np0005596060 podman[283604]: 2026-01-26 18:29:20.642721736 +0000 UTC m=+0.023799156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:29:20 np0005596060 podman[283604]: 2026-01-26 18:29:20.741700554 +0000 UTC m=+0.122777954 container start 19de8d6b0dbfb6cb8f841090ceb785faf84d1612e5df277fd170917fa0a3f839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 13:29:20 np0005596060 nova_compute[247421]: 2026-01-26 18:29:20.742 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:20 np0005596060 podman[283604]: 2026-01-26 18:29:20.745160922 +0000 UTC m=+0.126238342 container attach 19de8d6b0dbfb6cb8f841090ceb785faf84d1612e5df277fd170917fa0a3f839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:29:20 np0005596060 focused_gould[283621]: 167 167
Jan 26 13:29:20 np0005596060 systemd[1]: libpod-19de8d6b0dbfb6cb8f841090ceb785faf84d1612e5df277fd170917fa0a3f839.scope: Deactivated successfully.
Jan 26 13:29:20 np0005596060 podman[283604]: 2026-01-26 18:29:20.747886422 +0000 UTC m=+0.128963822 container died 19de8d6b0dbfb6cb8f841090ceb785faf84d1612e5df277fd170917fa0a3f839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:29:20 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f6df75fa98c409ad1865e3d5fb13ae078df357e71e614fbef0778c16121d0b6a-merged.mount: Deactivated successfully.
Jan 26 13:29:20 np0005596060 podman[283604]: 2026-01-26 18:29:20.785029316 +0000 UTC m=+0.166106716 container remove 19de8d6b0dbfb6cb8f841090ceb785faf84d1612e5df277fd170917fa0a3f839 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_gould, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 13:29:20 np0005596060 systemd[1]: libpod-conmon-19de8d6b0dbfb6cb8f841090ceb785faf84d1612e5df277fd170917fa0a3f839.scope: Deactivated successfully.
Jan 26 13:29:20 np0005596060 podman[283645]: 2026-01-26 18:29:20.987822895 +0000 UTC m=+0.069756055 container create fb44b247335812881e78406d2032ba90dfce008a2fc582c2c1b44c4c0d1b2ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 13:29:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:21.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:21 np0005596060 podman[283645]: 2026-01-26 18:29:20.939580448 +0000 UTC m=+0.021513628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:29:21 np0005596060 systemd[1]: Started libpod-conmon-fb44b247335812881e78406d2032ba90dfce008a2fc582c2c1b44c4c0d1b2ece.scope.
Jan 26 13:29:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:21.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:21 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:29:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f575fd3bc945ab0a9a4ad03a303a9e53b4029e93ec8432522d0d147c8abab0dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f575fd3bc945ab0a9a4ad03a303a9e53b4029e93ec8432522d0d147c8abab0dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f575fd3bc945ab0a9a4ad03a303a9e53b4029e93ec8432522d0d147c8abab0dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f575fd3bc945ab0a9a4ad03a303a9e53b4029e93ec8432522d0d147c8abab0dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:21 np0005596060 podman[283645]: 2026-01-26 18:29:21.091637276 +0000 UTC m=+0.173570456 container init fb44b247335812881e78406d2032ba90dfce008a2fc582c2c1b44c4c0d1b2ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 13:29:21 np0005596060 podman[283645]: 2026-01-26 18:29:21.10082236 +0000 UTC m=+0.182755520 container start fb44b247335812881e78406d2032ba90dfce008a2fc582c2c1b44c4c0d1b2ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_blackburn, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:29:21 np0005596060 podman[283645]: 2026-01-26 18:29:21.104676618 +0000 UTC m=+0.186609798 container attach fb44b247335812881e78406d2032ba90dfce008a2fc582c2c1b44c4c0d1b2ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:29:21 np0005596060 nova_compute[247421]: 2026-01-26 18:29:21.841 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]: {
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:    "1": [
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:        {
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            "devices": [
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "/dev/loop3"
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            ],
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            "lv_name": "ceph_lv0",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            "lv_size": "7511998464",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            "name": "ceph_lv0",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            "tags": {
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "ceph.cluster_name": "ceph",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "ceph.crush_device_class": "",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "ceph.encrypted": "0",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "ceph.osd_id": "1",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "ceph.type": "block",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:                "ceph.vdo": "0"
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            },
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            "type": "block",
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:            "vg_name": "ceph_vg0"
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:        }
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]:    ]
Jan 26 13:29:21 np0005596060 beautiful_blackburn[283661]: }
Jan 26 13:29:21 np0005596060 systemd[1]: libpod-fb44b247335812881e78406d2032ba90dfce008a2fc582c2c1b44c4c0d1b2ece.scope: Deactivated successfully.
Jan 26 13:29:21 np0005596060 podman[283645]: 2026-01-26 18:29:21.904618178 +0000 UTC m=+0.986551338 container died fb44b247335812881e78406d2032ba90dfce008a2fc582c2c1b44c4c0d1b2ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_blackburn, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:29:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f575fd3bc945ab0a9a4ad03a303a9e53b4029e93ec8432522d0d147c8abab0dc-merged.mount: Deactivated successfully.
Jan 26 13:29:22 np0005596060 podman[283645]: 2026-01-26 18:29:22.071813002 +0000 UTC m=+1.153746162 container remove fb44b247335812881e78406d2032ba90dfce008a2fc582c2c1b44c4c0d1b2ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_blackburn, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:29:22 np0005596060 systemd[1]: libpod-conmon-fb44b247335812881e78406d2032ba90dfce008a2fc582c2c1b44c4c0d1b2ece.scope: Deactivated successfully.
Jan 26 13:29:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 238 op/s
Jan 26 13:29:22 np0005596060 podman[283822]: 2026-01-26 18:29:22.726761823 +0000 UTC m=+0.051685075 container create 81ab42888084332d91b103112201575cbe4a01ae213f15c369a05047217cfed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_curie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:29:22 np0005596060 systemd[1]: Started libpod-conmon-81ab42888084332d91b103112201575cbe4a01ae213f15c369a05047217cfed4.scope.
Jan 26 13:29:22 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:29:22 np0005596060 podman[283822]: 2026-01-26 18:29:22.70226619 +0000 UTC m=+0.027189462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:29:22 np0005596060 podman[283822]: 2026-01-26 18:29:22.816704581 +0000 UTC m=+0.141627843 container init 81ab42888084332d91b103112201575cbe4a01ae213f15c369a05047217cfed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_curie, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:29:22 np0005596060 podman[283822]: 2026-01-26 18:29:22.823055003 +0000 UTC m=+0.147978245 container start 81ab42888084332d91b103112201575cbe4a01ae213f15c369a05047217cfed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_curie, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:29:22 np0005596060 podman[283822]: 2026-01-26 18:29:22.826566982 +0000 UTC m=+0.151490244 container attach 81ab42888084332d91b103112201575cbe4a01ae213f15c369a05047217cfed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 13:29:22 np0005596060 pensive_curie[283839]: 167 167
Jan 26 13:29:22 np0005596060 systemd[1]: libpod-81ab42888084332d91b103112201575cbe4a01ae213f15c369a05047217cfed4.scope: Deactivated successfully.
Jan 26 13:29:22 np0005596060 podman[283822]: 2026-01-26 18:29:22.829450096 +0000 UTC m=+0.154373368 container died 81ab42888084332d91b103112201575cbe4a01ae213f15c369a05047217cfed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 13:29:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-20beed6bb5074231042c77b1ed64ccb3a87b6b8584e27d31b82523870ded47aa-merged.mount: Deactivated successfully.
Jan 26 13:29:22 np0005596060 podman[283822]: 2026-01-26 18:29:22.867093443 +0000 UTC m=+0.192016685 container remove 81ab42888084332d91b103112201575cbe4a01ae213f15c369a05047217cfed4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_curie, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 13:29:22 np0005596060 systemd[1]: libpod-conmon-81ab42888084332d91b103112201575cbe4a01ae213f15c369a05047217cfed4.scope: Deactivated successfully.
Jan 26 13:29:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:23.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:29:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:23.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:29:23 np0005596060 podman[283864]: 2026-01-26 18:29:23.048307903 +0000 UTC m=+0.048868544 container create b1e9f8380ad1dd026b37560686c87bfcd624eb57a01c420bbbfc80baf5022b7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:29:23 np0005596060 systemd[1]: Started libpod-conmon-b1e9f8380ad1dd026b37560686c87bfcd624eb57a01c420bbbfc80baf5022b7d.scope.
Jan 26 13:29:23 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:29:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ddfc824142d613512c75104cc9419c2b84d91bb869aa054c7b17df066e6293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ddfc824142d613512c75104cc9419c2b84d91bb869aa054c7b17df066e6293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ddfc824142d613512c75104cc9419c2b84d91bb869aa054c7b17df066e6293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ddfc824142d613512c75104cc9419c2b84d91bb869aa054c7b17df066e6293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:29:23 np0005596060 podman[283864]: 2026-01-26 18:29:23.027150405 +0000 UTC m=+0.027711106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:29:23 np0005596060 podman[283864]: 2026-01-26 18:29:23.134067945 +0000 UTC m=+0.134628616 container init b1e9f8380ad1dd026b37560686c87bfcd624eb57a01c420bbbfc80baf5022b7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jemison, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 13:29:23 np0005596060 podman[283864]: 2026-01-26 18:29:23.140632642 +0000 UTC m=+0.141193283 container start b1e9f8380ad1dd026b37560686c87bfcd624eb57a01c420bbbfc80baf5022b7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jemison, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 13:29:23 np0005596060 podman[283864]: 2026-01-26 18:29:23.144856289 +0000 UTC m=+0.145416960 container attach b1e9f8380ad1dd026b37560686c87bfcd624eb57a01c420bbbfc80baf5022b7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jemison, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.150 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.568 247428 DEBUG oslo_concurrency.lockutils [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Acquiring lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.568 247428 DEBUG oslo_concurrency.lockutils [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.569 247428 DEBUG oslo_concurrency.lockutils [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Acquiring lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.569 247428 DEBUG oslo_concurrency.lockutils [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.569 247428 DEBUG oslo_concurrency.lockutils [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.570 247428 INFO nova.compute.manager [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Terminating instance#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.571 247428 DEBUG nova.compute.manager [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:29:23 np0005596060 kernel: tap576a36c0-4a (unregistering): left promiscuous mode
Jan 26 13:29:23 np0005596060 NetworkManager[48900]: <info>  [1769452163.6265] device (tap576a36c0-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:29:23 np0005596060 ovn_controller[148842]: 2026-01-26T18:29:23Z|00105|binding|INFO|Releasing lport 576a36c0-4aed-492a-b678-83c1eaef931b from this chassis (sb_readonly=0)
Jan 26 13:29:23 np0005596060 ovn_controller[148842]: 2026-01-26T18:29:23Z|00106|binding|INFO|Setting lport 576a36c0-4aed-492a-b678-83c1eaef931b down in Southbound
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.635 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:23 np0005596060 ovn_controller[148842]: 2026-01-26T18:29:23Z|00107|binding|INFO|Removing iface tap576a36c0-4a ovn-installed in OVS
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.637 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:23.647 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:c8:1e 10.100.0.9'], port_security=['fa:16:3e:53:c8:1e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ede36747-ccc3-4077-b6f0-a5a6663f4cd7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f2b1e48060904db7a7d629fffdaa921a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0c93d08d-c0a8-4947-b001-f618e8c0b8aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4eb7435b-663a-4566-9286-29c15a28c76b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=576a36c0-4aed-492a-b678-83c1eaef931b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:29:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:23.648 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 576a36c0-4aed-492a-b678-83c1eaef931b in datapath 3f70dd9e-997c-43d9-abf7-8ac842dc7a2a unbound from our chassis#033[00m
Jan 26 13:29:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:23.649 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f70dd9e-997c-43d9-abf7-8ac842dc7a2a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:29:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:23.650 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5c4f2952-f85a-471f-9e4f-69f868203371]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:23.651 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a namespace which is not needed anymore#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.661 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:23 np0005596060 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 26 13:29:23 np0005596060 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000012.scope: Consumed 6.471s CPU time.
Jan 26 13:29:23 np0005596060 systemd-machined[213879]: Machine qemu-8-instance-00000012 terminated.
Jan 26 13:29:23 np0005596060 neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a[283205]: [NOTICE]   (283234) : haproxy version is 2.8.14-c23fe91
Jan 26 13:29:23 np0005596060 neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a[283205]: [NOTICE]   (283234) : path to executable is /usr/sbin/haproxy
Jan 26 13:29:23 np0005596060 neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a[283205]: [WARNING]  (283234) : Exiting Master process...
Jan 26 13:29:23 np0005596060 neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a[283205]: [ALERT]    (283234) : Current worker (283253) exited with code 143 (Terminated)
Jan 26 13:29:23 np0005596060 neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a[283205]: [WARNING]  (283234) : All workers exited. Exiting... (0)
Jan 26 13:29:23 np0005596060 systemd[1]: libpod-cd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad.scope: Deactivated successfully.
Jan 26 13:29:23 np0005596060 podman[283909]: 2026-01-26 18:29:23.787959309 +0000 UTC m=+0.045094218 container died cd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.808 247428 INFO nova.virt.libvirt.driver [-] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Instance destroyed successfully.#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.809 247428 DEBUG nova.objects.instance [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lazy-loading 'resources' on Instance uuid ede36747-ccc3-4077-b6f0-a5a6663f4cd7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:29:23 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad-userdata-shm.mount: Deactivated successfully.
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.825 247428 DEBUG nova.virt.libvirt.vif [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:29:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-1610684880',display_name='tempest-TestServerMultinode-server-1610684880',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-1610684880',id=18,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:29:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f2b1e48060904db7a7d629fffdaa921a',ramdisk_id='',reservation_id='r-asirxge5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-128980879',owner_user_name='tempest-TestServerMultinode-128980879-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:29:18Z,user_data=None,user_id='87b6f2cd2d124de2be281e270184d195',uuid=ede36747-ccc3-4077-b6f0-a5a6663f4cd7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "576a36c0-4aed-492a-b678-83c1eaef931b", "address": "fa:16:3e:53:c8:1e", "network": {"id": "3f70dd9e-997c-43d9-abf7-8ac842dc7a2a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1075445344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dd033a95e4c454f82b471fb31b8c978", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap576a36c0-4a", "ovs_interfaceid": "576a36c0-4aed-492a-b678-83c1eaef931b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.826 247428 DEBUG nova.network.os_vif_util [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Converting VIF {"id": "576a36c0-4aed-492a-b678-83c1eaef931b", "address": "fa:16:3e:53:c8:1e", "network": {"id": "3f70dd9e-997c-43d9-abf7-8ac842dc7a2a", "bridge": "br-int", "label": "tempest-TestServerMultinode-1075445344-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1dd033a95e4c454f82b471fb31b8c978", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap576a36c0-4a", "ovs_interfaceid": "576a36c0-4aed-492a-b678-83c1eaef931b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.826 247428 DEBUG nova.network.os_vif_util [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:c8:1e,bridge_name='br-int',has_traffic_filtering=True,id=576a36c0-4aed-492a-b678-83c1eaef931b,network=Network(3f70dd9e-997c-43d9-abf7-8ac842dc7a2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap576a36c0-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.826 247428 DEBUG os_vif [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c8:1e,bridge_name='br-int',has_traffic_filtering=True,id=576a36c0-4aed-492a-b678-83c1eaef931b,network=Network(3f70dd9e-997c-43d9-abf7-8ac842dc7a2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap576a36c0-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.828 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.828 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap576a36c0-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:29:23 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b7aff024a49d5bbe341c4c1e23deae691dd9f60c9819bd528e1d8c90bfc1580e-merged.mount: Deactivated successfully.
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.830 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.833 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.838 247428 INFO os_vif [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:c8:1e,bridge_name='br-int',has_traffic_filtering=True,id=576a36c0-4aed-492a-b678-83c1eaef931b,network=Network(3f70dd9e-997c-43d9-abf7-8ac842dc7a2a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap576a36c0-4a')#033[00m
Jan 26 13:29:23 np0005596060 podman[283909]: 2026-01-26 18:29:23.870902429 +0000 UTC m=+0.128037338 container cleanup cd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:29:23 np0005596060 systemd[1]: libpod-conmon-cd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad.scope: Deactivated successfully.
Jan 26 13:29:23 np0005596060 podman[283976]: 2026-01-26 18:29:23.944932272 +0000 UTC m=+0.048058194 container remove cd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.949 247428 DEBUG nova.compute.manager [req-db80365f-03b4-4756-9dad-7555946c667c req-79f57f3b-45a2-496a-96fe-11583a828aea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Received event network-vif-unplugged-576a36c0-4aed-492a-b678-83c1eaef931b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.950 247428 DEBUG oslo_concurrency.lockutils [req-db80365f-03b4-4756-9dad-7555946c667c req-79f57f3b-45a2-496a-96fe-11583a828aea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.950 247428 DEBUG oslo_concurrency.lockutils [req-db80365f-03b4-4756-9dad-7555946c667c req-79f57f3b-45a2-496a-96fe-11583a828aea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.950 247428 DEBUG oslo_concurrency.lockutils [req-db80365f-03b4-4756-9dad-7555946c667c req-79f57f3b-45a2-496a-96fe-11583a828aea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.950 247428 DEBUG nova.compute.manager [req-db80365f-03b4-4756-9dad-7555946c667c req-79f57f3b-45a2-496a-96fe-11583a828aea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] No waiting events found dispatching network-vif-unplugged-576a36c0-4aed-492a-b678-83c1eaef931b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.950 247428 DEBUG nova.compute.manager [req-db80365f-03b4-4756-9dad-7555946c667c req-79f57f3b-45a2-496a-96fe-11583a828aea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Received event network-vif-unplugged-576a36c0-4aed-492a-b678-83c1eaef931b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:29:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:23.951 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf5057e-daa8-44d4-baa4-b1e4be1bb56e]: (4, ('Mon Jan 26 06:29:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a (cd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad)\ncd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad\nMon Jan 26 06:29:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a (cd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad)\ncd900c888497ff080b6f985f91a156b4fe579c3f35c10ba92a83223c056407ad\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:23.953 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e36f8b32-4530-457d-9cb6-cad629b339c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:23.954 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f70dd9e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.955 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:23 np0005596060 kernel: tap3f70dd9e-90: left promiscuous mode
Jan 26 13:29:23 np0005596060 nostalgic_jemison[283881]: {
Jan 26 13:29:23 np0005596060 nostalgic_jemison[283881]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:29:23 np0005596060 nostalgic_jemison[283881]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:29:23 np0005596060 nostalgic_jemison[283881]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:29:23 np0005596060 nostalgic_jemison[283881]:        "osd_id": 1,
Jan 26 13:29:23 np0005596060 nostalgic_jemison[283881]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:29:23 np0005596060 nostalgic_jemison[283881]:        "type": "bluestore"
Jan 26 13:29:23 np0005596060 nostalgic_jemison[283881]:    }
Jan 26 13:29:23 np0005596060 nostalgic_jemison[283881]: }
Jan 26 13:29:23 np0005596060 nova_compute[247421]: 2026-01-26 18:29:23.970 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:23.973 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[de88c049-a33f-4858-896b-e15339f3ee21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:23.989 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d2e103-9170-4410-af74-fce8a76afa28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:23.991 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[c6776e85-56a4-4836-9b7d-8f640bdc7b98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:24 np0005596060 systemd[1]: libpod-b1e9f8380ad1dd026b37560686c87bfcd624eb57a01c420bbbfc80baf5022b7d.scope: Deactivated successfully.
Jan 26 13:29:24 np0005596060 conmon[283881]: conmon b1e9f8380ad1dd026b37 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1e9f8380ad1dd026b37560686c87bfcd624eb57a01c420bbbfc80baf5022b7d.scope/container/memory.events
Jan 26 13:29:24 np0005596060 podman[283864]: 2026-01-26 18:29:24.003890942 +0000 UTC m=+1.004451583 container died b1e9f8380ad1dd026b37560686c87bfcd624eb57a01c420bbbfc80baf5022b7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jemison, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:29:24 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:24.008 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb1b21b-7db3-45a7-96b0-a0a8a921e39a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592191, 'reachable_time': 32696, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283999, 'error': None, 'target': 'ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:24 np0005596060 systemd[1]: run-netns-ovnmeta\x2d3f70dd9e\x2d997c\x2d43d9\x2dabf7\x2d8ac842dc7a2a.mount: Deactivated successfully.
Jan 26 13:29:24 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:24.011 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f70dd9e-997c-43d9-abf7-8ac842dc7a2a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:29:24 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:24.012 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[34e01528-8269-4896-b0c9-93b699022368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:29:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay-83ddfc824142d613512c75104cc9419c2b84d91bb869aa054c7b17df066e6293-merged.mount: Deactivated successfully.
Jan 26 13:29:24 np0005596060 podman[283864]: 2026-01-26 18:29:24.058222564 +0000 UTC m=+1.058783205 container remove b1e9f8380ad1dd026b37560686c87bfcd624eb57a01c420bbbfc80baf5022b7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jemison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:29:24 np0005596060 systemd[1]: libpod-conmon-b1e9f8380ad1dd026b37560686c87bfcd624eb57a01c420bbbfc80baf5022b7d.scope: Deactivated successfully.
Jan 26 13:29:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:29:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:29:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 1caa6b5e-7286-48d6-afb2-ebad152c5926 does not exist
Jan 26 13:29:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 41a110ae-28a2-40ae-923c-ac762eeb6e6d does not exist
Jan 26 13:29:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 62da42a5-98ea-41a8-8848-665d7549ba66 does not exist
Jan 26 13:29:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:24 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:29:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 168 op/s
Jan 26 13:29:24 np0005596060 nova_compute[247421]: 2026-01-26 18:29:24.623 247428 INFO nova.virt.libvirt.driver [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Deleting instance files /var/lib/nova/instances/ede36747-ccc3-4077-b6f0-a5a6663f4cd7_del#033[00m
Jan 26 13:29:24 np0005596060 nova_compute[247421]: 2026-01-26 18:29:24.625 247428 INFO nova.virt.libvirt.driver [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Deletion of /var/lib/nova/instances/ede36747-ccc3-4077-b6f0-a5a6663f4cd7_del complete#033[00m
Jan 26 13:29:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:25.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:25.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:25 np0005596060 nova_compute[247421]: 2026-01-26 18:29:25.149 247428 INFO nova.compute.manager [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Took 1.58 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:29:25 np0005596060 nova_compute[247421]: 2026-01-26 18:29:25.150 247428 DEBUG oslo.service.loopingcall [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:29:25 np0005596060 nova_compute[247421]: 2026-01-26 18:29:25.150 247428 DEBUG nova.compute.manager [-] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:29:25 np0005596060 nova_compute[247421]: 2026-01-26 18:29:25.150 247428 DEBUG nova.network.neutron [-] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.325608) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452165325654, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 468, "num_deletes": 255, "total_data_size": 463658, "memory_usage": 473816, "flush_reason": "Manual Compaction"}
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452165330335, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 448863, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36228, "largest_seqno": 36695, "table_properties": {"data_size": 446017, "index_size": 818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6828, "raw_average_key_size": 19, "raw_value_size": 440310, "raw_average_value_size": 1226, "num_data_blocks": 34, "num_entries": 359, "num_filter_entries": 359, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769452147, "oldest_key_time": 1769452147, "file_creation_time": 1769452165, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 4753 microseconds, and 1937 cpu microseconds.
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.330365) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 448863 bytes OK
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.330381) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.332260) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.332273) EVENT_LOG_v1 {"time_micros": 1769452165332268, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.332288) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 460775, prev total WAL file size 460775, number of live WAL files 2.
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.332679) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(438KB)], [77(10MB)]
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452165332731, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11317841, "oldest_snapshot_seqno": -1}
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6099 keys, 11180119 bytes, temperature: kUnknown
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452165405307, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 11180119, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11137439, "index_size": 26314, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 157293, "raw_average_key_size": 25, "raw_value_size": 11025657, "raw_average_value_size": 1807, "num_data_blocks": 1058, "num_entries": 6099, "num_filter_entries": 6099, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769452165, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.405629) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 11180119 bytes
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.407837) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.8 rd, 153.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.4 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(50.1) write-amplify(24.9) OK, records in: 6623, records dropped: 524 output_compression: NoCompression
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.407854) EVENT_LOG_v1 {"time_micros": 1769452165407847, "job": 44, "event": "compaction_finished", "compaction_time_micros": 72648, "compaction_time_cpu_micros": 27484, "output_level": 6, "num_output_files": 1, "total_output_size": 11180119, "num_input_records": 6623, "num_output_records": 6099, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452165408070, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452165410317, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.332632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.410374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.410378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.410379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.410381) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:29:25 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:29:25.410382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.112 247428 DEBUG nova.compute.manager [req-cdf79c55-980b-4122-a237-23a002a467e2 req-b24ee8ad-47b9-4ec3-ad90-45b946b7a610 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Received event network-vif-plugged-576a36c0-4aed-492a-b678-83c1eaef931b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.113 247428 DEBUG oslo_concurrency.lockutils [req-cdf79c55-980b-4122-a237-23a002a467e2 req-b24ee8ad-47b9-4ec3-ad90-45b946b7a610 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.113 247428 DEBUG oslo_concurrency.lockutils [req-cdf79c55-980b-4122-a237-23a002a467e2 req-b24ee8ad-47b9-4ec3-ad90-45b946b7a610 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.113 247428 DEBUG oslo_concurrency.lockutils [req-cdf79c55-980b-4122-a237-23a002a467e2 req-b24ee8ad-47b9-4ec3-ad90-45b946b7a610 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.113 247428 DEBUG nova.compute.manager [req-cdf79c55-980b-4122-a237-23a002a467e2 req-b24ee8ad-47b9-4ec3-ad90-45b946b7a610 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] No waiting events found dispatching network-vif-plugged-576a36c0-4aed-492a-b678-83c1eaef931b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.114 247428 WARNING nova.compute.manager [req-cdf79c55-980b-4122-a237-23a002a467e2 req-b24ee8ad-47b9-4ec3-ad90-45b946b7a610 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Received unexpected event network-vif-plugged-576a36c0-4aed-492a-b678-83c1eaef931b for instance with vm_state active and task_state deleting.#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.115 247428 DEBUG nova.network.neutron [-] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.149 247428 INFO nova.compute.manager [-] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Took 1.00 seconds to deallocate network for instance.#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.259 247428 DEBUG nova.compute.manager [req-19851efe-2b4d-455b-a54c-050b86ab6e67 req-1f6ab826-b7ed-48a3-b347-5ae678775abc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Received event network-vif-deleted-576a36c0-4aed-492a-b678-83c1eaef931b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.266 247428 DEBUG oslo_concurrency.lockutils [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.266 247428 DEBUG oslo_concurrency.lockutils [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.329 247428 DEBUG oslo_concurrency.processutils [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:29:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 120 MiB data, 311 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 171 op/s
Jan 26 13:29:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:29:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2955315145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.769 247428 DEBUG oslo_concurrency.processutils [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.775 247428 DEBUG nova.compute.provider_tree [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.803 247428 DEBUG nova.scheduler.client.report [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.842 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.861 247428 DEBUG oslo_concurrency.lockutils [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.909 247428 INFO nova.scheduler.client.report [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Deleted allocations for instance ede36747-ccc3-4077-b6f0-a5a6663f4cd7#033[00m
Jan 26 13:29:26 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:29:26.927 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:29:26 np0005596060 nova_compute[247421]: 2026-01-26 18:29:26.990 247428 DEBUG oslo_concurrency.lockutils [None req-c74bc1a2-78bf-477a-b85d-ca19a09fc29e 87b6f2cd2d124de2be281e270184d195 f2b1e48060904db7a7d629fffdaa921a - - default default] Lock "ede36747-ccc3-4077-b6f0-a5a6663f4cd7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:27.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:27.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1687: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 48 KiB/s wr, 195 op/s
Jan 26 13:29:28 np0005596060 nova_compute[247421]: 2026-01-26 18:29:28.831 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:29.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:29.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:29:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 23 KiB/s wr, 156 op/s
Jan 26 13:29:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:29:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:31.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:29:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:31.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:31 np0005596060 nova_compute[247421]: 2026-01-26 18:29:31.843 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1689: 305 pgs: 305 active+clean; 42 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 24 KiB/s wr, 183 op/s
Jan 26 13:29:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:29:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:33.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:29:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:29:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:33.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:29:33 np0005596060 nova_compute[247421]: 2026-01-26 18:29:33.834 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 42 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 44 KiB/s rd, 24 KiB/s wr, 58 op/s
Jan 26 13:29:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:35.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:35.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:29:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1831061088' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:29:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:29:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1831061088' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:29:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:29:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 42 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 24 KiB/s wr, 59 op/s
Jan 26 13:29:36 np0005596060 nova_compute[247421]: 2026-01-26 18:29:36.845 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:37.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:37.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 41 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 24 KiB/s wr, 70 op/s
Jan 26 13:29:38 np0005596060 nova_compute[247421]: 2026-01-26 18:29:38.807 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769452163.8058245, ede36747-ccc3-4077-b6f0-a5a6663f4cd7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:29:38 np0005596060 nova_compute[247421]: 2026-01-26 18:29:38.807 247428 INFO nova.compute.manager [-] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:29:38 np0005596060 nova_compute[247421]: 2026-01-26 18:29:38.836 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:38 np0005596060 nova_compute[247421]: 2026-01-26 18:29:38.991 247428 DEBUG nova.compute.manager [None req-2a3b0b02-5b75-4497-bcf6-56adc13215ad - - - - - -] [instance: ede36747-ccc3-4077-b6f0-a5a6663f4cd7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:29:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:39.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:39.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:29:40 np0005596060 podman[284145]: 2026-01-26 18:29:40.41578408 +0000 UTC m=+0.059254958 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:29:40 np0005596060 podman[284146]: 2026-01-26 18:29:40.452009962 +0000 UTC m=+0.096104126 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:29:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 41 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 1.7 KiB/s wr, 41 op/s
Jan 26 13:29:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:29:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:41.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:29:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:29:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:41.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:29:41 np0005596060 nova_compute[247421]: 2026-01-26 18:29:41.848 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 41 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.2 KiB/s wr, 49 op/s
Jan 26 13:29:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:43.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:43.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:43 np0005596060 nova_compute[247421]: 2026-01-26 18:29:43.838 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:29:44
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'volumes', '.rgw.root']
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:29:44 np0005596060 nova_compute[247421]: 2026-01-26 18:29:44.527 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 41 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1023 B/s wr, 22 op/s
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:29:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:29:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:45.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:45.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:29:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 57 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 652 KiB/s wr, 34 op/s
Jan 26 13:29:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Jan 26 13:29:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Jan 26 13:29:46 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Jan 26 13:29:46 np0005596060 nova_compute[247421]: 2026-01-26 18:29:46.851 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:47.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 53 op/s
Jan 26 13:29:48 np0005596060 nova_compute[247421]: 2026-01-26 18:29:48.840 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:49.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:49.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:29:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 53 op/s
Jan 26 13:29:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:51.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:51 np0005596060 nova_compute[247421]: 2026-01-26 18:29:51.852 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 26 13:29:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:53.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:53 np0005596060 nova_compute[247421]: 2026-01-26 18:29:53.843 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 26 13:29:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:29:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:55.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:29:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:55.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:29:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 MiB/s wr, 34 op/s
Jan 26 13:29:56 np0005596060 nova_compute[247421]: 2026-01-26 18:29:56.854 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:57.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:29:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:57.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:29:57 np0005596060 nova_compute[247421]: 2026-01-26 18:29:57.467 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Acquiring lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:57 np0005596060 nova_compute[247421]: 2026-01-26 18:29:57.468 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:57 np0005596060 nova_compute[247421]: 2026-01-26 18:29:57.484 247428 DEBUG nova.compute.manager [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 26 13:29:57 np0005596060 nova_compute[247421]: 2026-01-26 18:29:57.724 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:29:57 np0005596060 nova_compute[247421]: 2026-01-26 18:29:57.725 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:29:57 np0005596060 nova_compute[247421]: 2026-01-26 18:29:57.738 247428 DEBUG nova.virt.hardware [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 26 13:29:57 np0005596060 nova_compute[247421]: 2026-01-26 18:29:57.738 247428 INFO nova.compute.claims [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 26 13:29:57 np0005596060 nova_compute[247421]: 2026-01-26 18:29:57.947 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:29:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:29:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422745662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:29:58 np0005596060 nova_compute[247421]: 2026-01-26 18:29:58.388 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:29:58 np0005596060 nova_compute[247421]: 2026-01-26 18:29:58.394 247428 DEBUG nova.compute.provider_tree [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:29:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 MiB/s wr, 29 op/s
Jan 26 13:29:58 np0005596060 nova_compute[247421]: 2026-01-26 18:29:58.845 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:29:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:29:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:29:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:29:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:29:59.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:29:59 np0005596060 nova_compute[247421]: 2026-01-26 18:29:59.259 247428 DEBUG nova.scheduler.client.report [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:29:59 np0005596060 nova_compute[247421]: 2026-01-26 18:29:59.962 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:29:59 np0005596060 nova_compute[247421]: 2026-01-26 18:29:59.962 247428 DEBUG nova.compute.manager [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 26 13:30:00 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 13:30:00 np0005596060 ceph-mon[74267]: overall HEALTH_OK
Jan 26 13:30:00 np0005596060 nova_compute[247421]: 2026-01-26 18:30:00.171 247428 DEBUG nova.compute.manager [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 26 13:30:00 np0005596060 nova_compute[247421]: 2026-01-26 18:30:00.172 247428 DEBUG nova.network.neutron [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 26 13:30:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:30:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Jan 26 13:30:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285026f0 =====
Jan 26 13:30:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285026f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:01 np0005596060 radosgw[92919]: beast: 0x7fc3285026f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:01.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:01.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:01 np0005596060 nova_compute[247421]: 2026-01-26 18:30:01.118 247428 INFO nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 26 13:30:01 np0005596060 nova_compute[247421]: 2026-01-26 18:30:01.226 247428 DEBUG nova.policy [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3bff6d7161b14b1d98f063d24c52c0ca', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4a8e8029f3ed448ea8965530e4aef753', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 26 13:30:01 np0005596060 nova_compute[247421]: 2026-01-26 18:30:01.857 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 21 GiB / 21 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 3 op/s
Jan 26 13:30:02 np0005596060 nova_compute[247421]: 2026-01-26 18:30:02.873 247428 DEBUG nova.compute.manager [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 26 13:30:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:03.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:03.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:03 np0005596060 nova_compute[247421]: 2026-01-26 18:30:03.848 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:03 np0005596060 nova_compute[247421]: 2026-01-26 18:30:03.886 247428 DEBUG nova.compute.manager [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 26 13:30:03 np0005596060 nova_compute[247421]: 2026-01-26 18:30:03.887 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 26 13:30:03 np0005596060 nova_compute[247421]: 2026-01-26 18:30:03.888 247428 INFO nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Creating image(s)#033[00m
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009891035036292573 of space, bias 1.0, pg target 0.2967310510887772 quantized to 32 (current 32)
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:30:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:30:03 np0005596060 nova_compute[247421]: 2026-01-26 18:30:03.918 247428 DEBUG nova.storage.rbd_utils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] rbd image 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:30:03 np0005596060 nova_compute[247421]: 2026-01-26 18:30:03.948 247428 DEBUG nova.storage.rbd_utils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] rbd image 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:30:03 np0005596060 nova_compute[247421]: 2026-01-26 18:30:03.986 247428 DEBUG nova.storage.rbd_utils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] rbd image 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:30:03 np0005596060 nova_compute[247421]: 2026-01-26 18:30:03.990 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:30:04 np0005596060 nova_compute[247421]: 2026-01-26 18:30:04.048 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:30:04 np0005596060 nova_compute[247421]: 2026-01-26 18:30:04.049 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Acquiring lock "0e27310cde9db7031eb6052434134c1283ddf216" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:30:04 np0005596060 nova_compute[247421]: 2026-01-26 18:30:04.049 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:30:04 np0005596060 nova_compute[247421]: 2026-01-26 18:30:04.050 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:30:04 np0005596060 nova_compute[247421]: 2026-01-26 18:30:04.075 247428 DEBUG nova.storage.rbd_utils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] rbd image 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:30:04 np0005596060 nova_compute[247421]: 2026-01-26 18:30:04.078 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:30:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 88 MiB data, 298 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:30:04 np0005596060 nova_compute[247421]: 2026-01-26 18:30:04.713 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:30:04 np0005596060 nova_compute[247421]: 2026-01-26 18:30:04.788 247428 DEBUG nova.storage.rbd_utils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] resizing rbd image 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 13:30:05 np0005596060 nova_compute[247421]: 2026-01-26 18:30:05.019 247428 DEBUG nova.objects.instance [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lazy-loading 'migration_context' on Instance uuid 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 13:30:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:05.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:05.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:30:05 np0005596060 nova_compute[247421]: 2026-01-26 18:30:05.636 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:30:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:05.636 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 13:30:05 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:05.637 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 13:30:05 np0005596060 nova_compute[247421]: 2026-01-26 18:30:05.759 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 13:30:05 np0005596060 nova_compute[247421]: 2026-01-26 18:30:05.760 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Ensure instance console log exists: /var/lib/nova/instances/536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 13:30:05 np0005596060 nova_compute[247421]: 2026-01-26 18:30:05.761 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:30:05 np0005596060 nova_compute[247421]: 2026-01-26 18:30:05.762 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:30:05 np0005596060 nova_compute[247421]: 2026-01-26 18:30:05.762 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:30:06 np0005596060 nova_compute[247421]: 2026-01-26 18:30:06.326 247428 DEBUG nova.network.neutron [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Successfully created port: fba64aff-9582-4f05-93c1-c6ef87b0b237 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 13:30:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 112 MiB data, 310 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 1020 KiB/s wr, 19 op/s
Jan 26 13:30:06 np0005596060 nova_compute[247421]: 2026-01-26 18:30:06.859 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:30:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:07.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:07.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:07 np0005596060 nova_compute[247421]: 2026-01-26 18:30:07.822 247428 DEBUG nova.network.neutron [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Successfully updated port: fba64aff-9582-4f05-93c1-c6ef87b0b237 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 13:30:07 np0005596060 nova_compute[247421]: 2026-01-26 18:30:07.842 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Acquiring lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 13:30:07 np0005596060 nova_compute[247421]: 2026-01-26 18:30:07.843 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Acquired lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 13:30:07 np0005596060 nova_compute[247421]: 2026-01-26 18:30:07.843 247428 DEBUG nova.network.neutron [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 13:30:07 np0005596060 nova_compute[247421]: 2026-01-26 18:30:07.949 247428 DEBUG nova.compute.manager [req-e59c579c-2685-45a9-92db-b4989c65d4af req-ece40ba0-9db6-454b-aa00-110959a84a3a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Received event network-changed-fba64aff-9582-4f05-93c1-c6ef87b0b237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 13:30:07 np0005596060 nova_compute[247421]: 2026-01-26 18:30:07.950 247428 DEBUG nova.compute.manager [req-e59c579c-2685-45a9-92db-b4989c65d4af req-ece40ba0-9db6-454b-aa00-110959a84a3a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Refreshing instance network info cache due to event network-changed-fba64aff-9582-4f05-93c1-c6ef87b0b237. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 13:30:07 np0005596060 nova_compute[247421]: 2026-01-26 18:30:07.950 247428 DEBUG oslo_concurrency.lockutils [req-e59c579c-2685-45a9-92db-b4989c65d4af req-ece40ba0-9db6-454b-aa00-110959a84a3a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 13:30:08 np0005596060 nova_compute[247421]: 2026-01-26 18:30:08.164 247428 DEBUG nova.network.neutron [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 13:30:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 26 13:30:08 np0005596060 nova_compute[247421]: 2026-01-26 18:30:08.850 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.110 247428 DEBUG nova.network.neutron [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Updating instance_info_cache with network_info: [{"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 13:30:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:09.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:09.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.131 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Releasing lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.131 247428 DEBUG nova.compute.manager [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Instance network_info: |[{"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.131 247428 DEBUG oslo_concurrency.lockutils [req-e59c579c-2685-45a9-92db-b4989c65d4af req-ece40ba0-9db6-454b-aa00-110959a84a3a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.132 247428 DEBUG nova.network.neutron [req-e59c579c-2685-45a9-92db-b4989c65d4af req-ece40ba0-9db6-454b-aa00-110959a84a3a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Refreshing network info cache for port fba64aff-9582-4f05-93c1-c6ef87b0b237 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.134 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Start _get_guest_xml network_info=[{"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.138 247428 WARNING nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.145 247428 DEBUG nova.virt.libvirt.host [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.146 247428 DEBUG nova.virt.libvirt.host [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.150 247428 DEBUG nova.virt.libvirt.host [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.150 247428 DEBUG nova.virt.libvirt.host [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.151 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.151 247428 DEBUG nova.virt.hardware [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.152 247428 DEBUG nova.virt.hardware [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.152 247428 DEBUG nova.virt.hardware [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.152 247428 DEBUG nova.virt.hardware [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.152 247428 DEBUG nova.virt.hardware [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.153 247428 DEBUG nova.virt.hardware [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.153 247428 DEBUG nova.virt.hardware [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.153 247428 DEBUG nova.virt.hardware [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.153 247428 DEBUG nova.virt.hardware [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.153 247428 DEBUG nova.virt.hardware [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.154 247428 DEBUG nova.virt.hardware [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.156 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:30:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:30:09 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3910912408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.649 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.685 247428 DEBUG nova.storage.rbd_utils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] rbd image 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:30:09 np0005596060 nova_compute[247421]: 2026-01-26 18:30:09.690 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:30:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:30:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2950284908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:30:10 np0005596060 nova_compute[247421]: 2026-01-26 18:30:10.185 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:30:10 np0005596060 nova_compute[247421]: 2026-01-26 18:30:10.187 247428 DEBUG nova.virt.libvirt.vif [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:29:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1896144893',display_name='tempest-TestNetworkBasicOps-server-1896144893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1896144893',id=20,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJD1GAAKPU4UHpAJNj9tg0PYZw2e+wWrjCNIOvavEeEaKY0ulKMjOCWkWjIlQ1hgErBA0KMJ7bweoI5ePPYuhwFVnIGmPDVd3/HS3123LXdTTSMOjBunUJcNc9vJtStXeA==',key_name='tempest-TestNetworkBasicOps-1236305140',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a8e8029f3ed448ea8965530e4aef753',ramdisk_id='',reservation_id='r-1lo26kcz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1131412788',owner_user_name='tempest-TestNetworkBasicOps-1131412788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:30:03Z,user_data=None,user_id='3bff6d7161b14b1d98f063d24c52c0ca',uuid=536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:30:10 np0005596060 nova_compute[247421]: 2026-01-26 18:30:10.187 247428 DEBUG nova.network.os_vif_util [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Converting VIF {"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:30:10 np0005596060 nova_compute[247421]: 2026-01-26 18:30:10.188 247428 DEBUG nova.network.os_vif_util [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:cb:52,bridge_name='br-int',has_traffic_filtering=True,id=fba64aff-9582-4f05-93c1-c6ef87b0b237,network=Network(6d45a4b4-37e9-4b54-9a0a-2197d41d528a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba64aff-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:30:10 np0005596060 nova_compute[247421]: 2026-01-26 18:30:10.189 247428 DEBUG nova.objects.instance [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lazy-loading 'pci_devices' on Instance uuid 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:30:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:30:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 26 13:30:10 np0005596060 podman[284504]: 2026-01-26 18:30:10.79606865 +0000 UTC m=+0.058470899 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 13:30:10 np0005596060 podman[284505]: 2026-01-26 18:30:10.82909835 +0000 UTC m=+0.088027441 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 13:30:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:11.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:11.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:11 np0005596060 nova_compute[247421]: 2026-01-26 18:30:11.862 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.020 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  <uuid>536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2</uuid>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  <name>instance-00000014</name>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <nova:name>tempest-TestNetworkBasicOps-server-1896144893</nova:name>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:30:09</nova:creationTime>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <nova:user uuid="3bff6d7161b14b1d98f063d24c52c0ca">tempest-TestNetworkBasicOps-1131412788-project-member</nova:user>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <nova:project uuid="4a8e8029f3ed448ea8965530e4aef753">tempest-TestNetworkBasicOps-1131412788</nova:project>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <nova:port uuid="fba64aff-9582-4f05-93c1-c6ef87b0b237">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <entry name="serial">536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2</entry>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <entry name="uuid">536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2</entry>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk.config">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:f4:cb:52"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <target dev="tapfba64aff-95"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2/console.log" append="off"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:30:12 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:30:12 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:30:12 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:30:12 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.020 247428 DEBUG nova.compute.manager [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Preparing to wait for external event network-vif-plugged-fba64aff-9582-4f05-93c1-c6ef87b0b237 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.021 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Acquiring lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.021 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.021 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.022 247428 DEBUG nova.virt.libvirt.vif [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:29:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1896144893',display_name='tempest-TestNetworkBasicOps-server-1896144893',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1896144893',id=20,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJD1GAAKPU4UHpAJNj9tg0PYZw2e+wWrjCNIOvavEeEaKY0ulKMjOCWkWjIlQ1hgErBA0KMJ7bweoI5ePPYuhwFVnIGmPDVd3/HS3123LXdTTSMOjBunUJcNc9vJtStXeA==',key_name='tempest-TestNetworkBasicOps-1236305140',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4a8e8029f3ed448ea8965530e4aef753',ramdisk_id='',reservation_id='r-1lo26kcz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1131412788',owner_user_name='tempest-TestNetworkBasicOps-1131412788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:30:03Z,user_data=None,user_id='3bff6d7161b14b1d98f063d24c52c0ca',uuid=536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.023 247428 DEBUG nova.network.os_vif_util [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Converting VIF {"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.023 247428 DEBUG nova.network.os_vif_util [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:cb:52,bridge_name='br-int',has_traffic_filtering=True,id=fba64aff-9582-4f05-93c1-c6ef87b0b237,network=Network(6d45a4b4-37e9-4b54-9a0a-2197d41d528a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba64aff-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.025 247428 DEBUG os_vif [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:cb:52,bridge_name='br-int',has_traffic_filtering=True,id=fba64aff-9582-4f05-93c1-c6ef87b0b237,network=Network(6d45a4b4-37e9-4b54-9a0a-2197d41d528a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba64aff-95') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.025 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.026 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.026 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.029 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.030 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfba64aff-95, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.030 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfba64aff-95, col_values=(('external_ids', {'iface-id': 'fba64aff-9582-4f05-93c1-c6ef87b0b237', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:cb:52', 'vm-uuid': '536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.032 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:12 np0005596060 NetworkManager[48900]: <info>  [1769452212.0332] manager: (tapfba64aff-95): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.034 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.040 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.042 247428 INFO os_vif [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:cb:52,bridge_name='br-int',has_traffic_filtering=True,id=fba64aff-9582-4f05-93c1-c6ef87b0b237,network=Network(6d45a4b4-37e9-4b54-9a0a-2197d41d528a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba64aff-95')#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.321 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.322 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.322 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] No VIF found with MAC fa:16:3e:f4:cb:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.323 247428 INFO nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Using config drive#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.352 247428 DEBUG nova.storage.rbd_utils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] rbd image 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.566 247428 DEBUG nova.network.neutron [req-e59c579c-2685-45a9-92db-b4989c65d4af req-ece40ba0-9db6-454b-aa00-110959a84a3a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Updated VIF entry in instance network info cache for port fba64aff-9582-4f05-93c1-c6ef87b0b237. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.566 247428 DEBUG nova.network.neutron [req-e59c579c-2685-45a9-92db-b4989c65d4af req-ece40ba0-9db6-454b-aa00-110959a84a3a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Updating instance_info_cache with network_info: [{"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:30:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.592 247428 DEBUG oslo_concurrency.lockutils [req-e59c579c-2685-45a9-92db-b4989c65d4af req-ece40ba0-9db6-454b-aa00-110959a84a3a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:30:12 np0005596060 nova_compute[247421]: 2026-01-26 18:30:12.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.194 247428 INFO nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Creating config drive at /var/lib/nova/instances/536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2/disk.config#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.199 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplu829v5f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:30:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:13.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:13.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.333 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplu829v5f" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.361 247428 DEBUG nova.storage.rbd_utils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] rbd image 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.365 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2/disk.config 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.562 247428 DEBUG oslo_concurrency.processutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2/disk.config 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.563 247428 INFO nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Deleting local config drive /var/lib/nova/instances/536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2/disk.config because it was imported into RBD.#033[00m
Jan 26 13:30:13 np0005596060 NetworkManager[48900]: <info>  [1769452213.6085] manager: (tapfba64aff-95): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.610 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.613 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.639 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:30:13 np0005596060 kernel: tapfba64aff-95: entered promiscuous mode
Jan 26 13:30:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:13Z|00108|binding|INFO|Claiming lport fba64aff-9582-4f05-93c1-c6ef87b0b237 for this chassis.
Jan 26 13:30:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:13Z|00109|binding|INFO|fba64aff-9582-4f05-93c1-c6ef87b0b237: Claiming fa:16:3e:f4:cb:52 10.100.0.13
Jan 26 13:30:13 np0005596060 systemd-udevd[284620]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.652 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.656 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.661 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:cb:52 10.100.0.13'], port_security=['fa:16:3e:f4:cb:52 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d45a4b4-37e9-4b54-9a0a-2197d41d528a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a8e8029f3ed448ea8965530e4aef753', 'neutron:revision_number': '2', 'neutron:security_group_ids': '433fd554-99ec-4e91-930a-6083c4ce4aa3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8982920-67c2-42f8-b3a8-c7528e2fb577, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=fba64aff-9582-4f05-93c1-c6ef87b0b237) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.662 159331 INFO neutron.agent.ovn.metadata.agent [-] Port fba64aff-9582-4f05-93c1-c6ef87b0b237 in datapath 6d45a4b4-37e9-4b54-9a0a-2197d41d528a bound to our chassis#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.663 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d45a4b4-37e9-4b54-9a0a-2197d41d528a#033[00m
Jan 26 13:30:13 np0005596060 NetworkManager[48900]: <info>  [1769452213.6694] device (tapfba64aff-95): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:30:13 np0005596060 NetworkManager[48900]: <info>  [1769452213.6701] device (tapfba64aff-95): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.676 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0083ea1a-4566-45c1-a2f3-63f02b586df9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.677 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d45a4b4-31 in ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.678 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d45a4b4-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.679 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0a73f51c-2f0c-40cd-b5f2-9331f2e4d417]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.679 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[161948c8-ffbc-415a-ae70-46f875fedf62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 systemd-machined[213879]: New machine qemu-9-instance-00000014.
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.692 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[70448730-c4c1-4f7b-adb3-e07eee91bd61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 systemd[1]: Started Virtual Machine qemu-9-instance-00000014.
Jan 26 13:30:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:13Z|00110|binding|INFO|Setting lport fba64aff-9582-4f05-93c1-c6ef87b0b237 ovn-installed in OVS
Jan 26 13:30:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:13Z|00111|binding|INFO|Setting lport fba64aff-9582-4f05-93c1-c6ef87b0b237 up in Southbound
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.720 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e976a8c6-e5d3-4503-87dc-4c0531e87870]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.721 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.749 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[14dade5a-6d26-465c-b63c-02981b110376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.754 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[2135f4ca-40a3-4ecb-b20e-fcf3469876fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 NetworkManager[48900]: <info>  [1769452213.7550] manager: (tap6d45a4b4-30): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Jan 26 13:30:13 np0005596060 systemd-udevd[284624]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.784 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[f21548c9-240b-4979-9e2a-49965d0bd03e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.787 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f634d1-c5ac-466f-9ad0-f3254535c470]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 NetworkManager[48900]: <info>  [1769452213.8092] device (tap6d45a4b4-30): carrier: link connected
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.813 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[75aeed51-ea80-4f7f-8c60-4047e8867e9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.830 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[6b36927e-a35c-4d22-ae2f-200a39c8e9b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d45a4b4-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:52:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597839, 'reachable_time': 21296, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284656, 'error': None, 'target': 'ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.846 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8c9021-211a-44ba-bd71-92b47eb88463]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:52ce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597839, 'tstamp': 597839}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284657, 'error': None, 'target': 'ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.863 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[edf12ba6-91c2-4b5d-bbff-723e84b3c783]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d45a4b4-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:52:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597839, 'reachable_time': 21296, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284658, 'error': None, 'target': 'ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.893 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0bee88-cf74-498c-9a02-f1978332baa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.953 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebb6ac6-1000-4168-bcc3-bd81e5df339b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.955 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d45a4b4-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.955 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.955 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d45a4b4-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.957 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:13 np0005596060 NetworkManager[48900]: <info>  [1769452213.9576] manager: (tap6d45a4b4-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Jan 26 13:30:13 np0005596060 kernel: tap6d45a4b4-30: entered promiscuous mode
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.959 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.960 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d45a4b4-30, col_values=(('external_ids', {'iface-id': '4e2db7ab-7b6e-4c95-8db9-10901ea92f65'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.961 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:13Z|00112|binding|INFO|Releasing lport 4e2db7ab-7b6e-4c95-8db9-10901ea92f65 from this chassis (sb_readonly=0)
Jan 26 13:30:13 np0005596060 nova_compute[247421]: 2026-01-26 18:30:13.975 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.976 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d45a4b4-37e9-4b54-9a0a-2197d41d528a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d45a4b4-37e9-4b54-9a0a-2197d41d528a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.976 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f0137541-f4d1-40ff-a1f8-ad8a7f0fa168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.977 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-6d45a4b4-37e9-4b54-9a0a-2197d41d528a
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/6d45a4b4-37e9-4b54-9a0a-2197d41d528a.pid.haproxy
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 6d45a4b4-37e9-4b54-9a0a-2197d41d528a
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:30:13 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:13.978 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a', 'env', 'PROCESS_TAG=haproxy-6d45a4b4-37e9-4b54-9a0a-2197d41d528a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d45a4b4-37e9-4b54-9a0a-2197d41d528a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.024 247428 DEBUG nova.compute.manager [req-6c9ccf31-c63b-4143-a47c-cfc1b1199f4a req-ba1b3482-1197-47f1-9cb9-eb0d7e274af7 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Received event network-vif-plugged-fba64aff-9582-4f05-93c1-c6ef87b0b237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.024 247428 DEBUG oslo_concurrency.lockutils [req-6c9ccf31-c63b-4143-a47c-cfc1b1199f4a req-ba1b3482-1197-47f1-9cb9-eb0d7e274af7 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.025 247428 DEBUG oslo_concurrency.lockutils [req-6c9ccf31-c63b-4143-a47c-cfc1b1199f4a req-ba1b3482-1197-47f1-9cb9-eb0d7e274af7 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.025 247428 DEBUG oslo_concurrency.lockutils [req-6c9ccf31-c63b-4143-a47c-cfc1b1199f4a req-ba1b3482-1197-47f1-9cb9-eb0d7e274af7 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.025 247428 DEBUG nova.compute.manager [req-6c9ccf31-c63b-4143-a47c-cfc1b1199f4a req-ba1b3482-1197-47f1-9cb9-eb0d7e274af7 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Processing event network-vif-plugged-fba64aff-9582-4f05-93c1-c6ef87b0b237 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 26 13:30:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:30:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:30:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:30:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:30:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:30:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:30:14 np0005596060 podman[284691]: 2026-01-26 18:30:14.311876069 +0000 UTC m=+0.021350095 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:30:14 np0005596060 podman[284691]: 2026-01-26 18:30:14.417039084 +0000 UTC m=+0.126513100 container create f6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:30:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 26 13:30:14 np0005596060 systemd[1]: Started libpod-conmon-f6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd.scope.
Jan 26 13:30:14 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:30:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f12dc2cc1c9a11a3368bcf0b5663b648a266e2603a6b8e1843da7d4830048182/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.696 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452214.6958208, 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.697 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] VM Started (Lifecycle Event)#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.699 247428 DEBUG nova.compute.manager [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.703 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.706 247428 INFO nova.virt.libvirt.driver [-] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Instance spawned successfully.#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.706 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 26 13:30:14 np0005596060 podman[284691]: 2026-01-26 18:30:14.716307417 +0000 UTC m=+0.425781443 container init f6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 13:30:14 np0005596060 podman[284691]: 2026-01-26 18:30:14.721872759 +0000 UTC m=+0.431346755 container start f6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.730 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.734 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:30:14 np0005596060 neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a[284748]: [NOTICE]   (284752) : New worker (284754) forked
Jan 26 13:30:14 np0005596060 neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a[284748]: [NOTICE]   (284752) : Loading success.
Jan 26 13:30:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:14.757 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:30:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:14.758 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:30:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:14.758 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.777 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.778 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452214.6960435, 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.778 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] VM Paused (Lifecycle Event)#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.788 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.789 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.789 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.790 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.790 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.791 247428 DEBUG nova.virt.libvirt.driver [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.825 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.828 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452214.7025595, 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.828 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.862 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.866 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.887 247428 INFO nova.compute.manager [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Took 11.00 seconds to spawn the instance on the hypervisor.#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.888 247428 DEBUG nova.compute.manager [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.921 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.959 247428 INFO nova.compute.manager [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Took 17.27 seconds to build instance.#033[00m
Jan 26 13:30:14 np0005596060 nova_compute[247421]: 2026-01-26 18:30:14.978 247428 DEBUG oslo_concurrency.lockutils [None req-cd405c9b-355b-42c1-ae40-a26d1b74cf73 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:30:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:15.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:15.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:30:15 np0005596060 nova_compute[247421]: 2026-01-26 18:30:15.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:30:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 26 13:30:16 np0005596060 nova_compute[247421]: 2026-01-26 18:30:16.863 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:17 np0005596060 nova_compute[247421]: 2026-01-26 18:30:17.031 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:17.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:17.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:17 np0005596060 nova_compute[247421]: 2026-01-26 18:30:17.607 247428 DEBUG nova.compute.manager [req-4f32d11e-e71f-4a5c-b57c-becb39697573 req-e636903d-f37b-46a3-a60b-8982e9ceb4ed 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Received event network-vif-plugged-fba64aff-9582-4f05-93c1-c6ef87b0b237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:30:17 np0005596060 nova_compute[247421]: 2026-01-26 18:30:17.608 247428 DEBUG oslo_concurrency.lockutils [req-4f32d11e-e71f-4a5c-b57c-becb39697573 req-e636903d-f37b-46a3-a60b-8982e9ceb4ed 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:30:17 np0005596060 nova_compute[247421]: 2026-01-26 18:30:17.608 247428 DEBUG oslo_concurrency.lockutils [req-4f32d11e-e71f-4a5c-b57c-becb39697573 req-e636903d-f37b-46a3-a60b-8982e9ceb4ed 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:30:17 np0005596060 nova_compute[247421]: 2026-01-26 18:30:17.608 247428 DEBUG oslo_concurrency.lockutils [req-4f32d11e-e71f-4a5c-b57c-becb39697573 req-e636903d-f37b-46a3-a60b-8982e9ceb4ed 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:30:17 np0005596060 nova_compute[247421]: 2026-01-26 18:30:17.608 247428 DEBUG nova.compute.manager [req-4f32d11e-e71f-4a5c-b57c-becb39697573 req-e636903d-f37b-46a3-a60b-8982e9ceb4ed 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] No waiting events found dispatching network-vif-plugged-fba64aff-9582-4f05-93c1-c6ef87b0b237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:30:17 np0005596060 nova_compute[247421]: 2026-01-26 18:30:17.608 247428 WARNING nova.compute.manager [req-4f32d11e-e71f-4a5c-b57c-becb39697573 req-e636903d-f37b-46a3-a60b-8982e9ceb4ed 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Received unexpected event network-vif-plugged-fba64aff-9582-4f05-93c1-c6ef87b0b237 for instance with vm_state active and task_state None.#033[00m
Jan 26 13:30:17 np0005596060 nova_compute[247421]: 2026-01-26 18:30:17.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:30:17 np0005596060 nova_compute[247421]: 2026-01-26 18:30:17.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:30:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 134 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 843 KiB/s wr, 162 op/s
Jan 26 13:30:18 np0005596060 nova_compute[247421]: 2026-01-26 18:30:18.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:30:18 np0005596060 nova_compute[247421]: 2026-01-26 18:30:18.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:30:18 np0005596060 nova_compute[247421]: 2026-01-26 18:30:18.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:30:19 np0005596060 nova_compute[247421]: 2026-01-26 18:30:19.114 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:30:19 np0005596060 nova_compute[247421]: 2026-01-26 18:30:19.114 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquired lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:30:19 np0005596060 nova_compute[247421]: 2026-01-26 18:30:19.114 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 26 13:30:19 np0005596060 nova_compute[247421]: 2026-01-26 18:30:19.114 247428 DEBUG nova.objects.instance [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:30:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:19.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:19.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:30:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 134 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 26 KiB/s wr, 146 op/s
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.091 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:21 np0005596060 NetworkManager[48900]: <info>  [1769452221.0919] manager: (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Jan 26 13:30:21 np0005596060 NetworkManager[48900]: <info>  [1769452221.0932] manager: (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Jan 26 13:30:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:30:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:21.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:30:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:21.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.189 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Updating instance_info_cache with network_info: [{"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.302 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:21 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:21Z|00113|binding|INFO|Releasing lport 4e2db7ab-7b6e-4c95-8db9-10901ea92f65 from this chassis (sb_readonly=0)
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.326 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.742 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Releasing lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.743 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.744 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.744 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.793 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.793 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.794 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.794 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.795 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:30:21 np0005596060 nova_compute[247421]: 2026-01-26 18:30:21.866 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.020 247428 DEBUG nova.compute.manager [req-c41ff3f2-a161-40e0-bc18-c956dabd62b3 req-0712df3c-8409-49ad-9765-385699904508 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Received event network-changed-fba64aff-9582-4f05-93c1-c6ef87b0b237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.021 247428 DEBUG nova.compute.manager [req-c41ff3f2-a161-40e0-bc18-c956dabd62b3 req-0712df3c-8409-49ad-9765-385699904508 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Refreshing instance network info cache due to event network-changed-fba64aff-9582-4f05-93c1-c6ef87b0b237. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.022 247428 DEBUG oslo_concurrency.lockutils [req-c41ff3f2-a161-40e0-bc18-c956dabd62b3 req-0712df3c-8409-49ad-9765-385699904508 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.022 247428 DEBUG oslo_concurrency.lockutils [req-c41ff3f2-a161-40e0-bc18-c956dabd62b3 req-0712df3c-8409-49ad-9765-385699904508 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.022 247428 DEBUG nova.network.neutron [req-c41ff3f2-a161-40e0-bc18-c956dabd62b3 req-0712df3c-8409-49ad-9765-385699904508 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Refreshing network info cache for port fba64aff-9582-4f05-93c1-c6ef87b0b237 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.033 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.187 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:30:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4212426150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.274 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:30:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:30:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130761419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:30:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:30:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130761419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:30:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 177 op/s
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.584 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.585 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.730 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.732 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4637MB free_disk=20.967235565185547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.732 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.733 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.925 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.926 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.926 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:30:22 np0005596060 nova_compute[247421]: 2026-01-26 18:30:22.969 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:30:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:30:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:23.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:30:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:23.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:30:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/767081027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:30:23 np0005596060 nova_compute[247421]: 2026-01-26 18:30:23.648 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:30:23 np0005596060 nova_compute[247421]: 2026-01-26 18:30:23.653 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:30:24 np0005596060 nova_compute[247421]: 2026-01-26 18:30:24.025 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:30:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 167 op/s
Jan 26 13:30:24 np0005596060 nova_compute[247421]: 2026-01-26 18:30:24.869 247428 DEBUG nova.network.neutron [req-c41ff3f2-a161-40e0-bc18-c956dabd62b3 req-0712df3c-8409-49ad-9765-385699904508 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Updated VIF entry in instance network info cache for port fba64aff-9582-4f05-93c1-c6ef87b0b237. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:30:24 np0005596060 nova_compute[247421]: 2026-01-26 18:30:24.869 247428 DEBUG nova.network.neutron [req-c41ff3f2-a161-40e0-bc18-c956dabd62b3 req-0712df3c-8409-49ad-9765-385699904508 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Updating instance_info_cache with network_info: [{"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:30:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:25.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:25.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:30:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:30:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:30:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:30:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:30:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:30:25 np0005596060 nova_compute[247421]: 2026-01-26 18:30:25.855 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:30:25 np0005596060 nova_compute[247421]: 2026-01-26 18:30:25.855 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:30:26 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:30:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:30:26 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev f1afdc22-3108-4457-a3fc-5c68173d3930 does not exist
Jan 26 13:30:26 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b0143233-045d-4713-915b-0dafd67b9792 does not exist
Jan 26 13:30:26 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b1e0d45a-7548-43ad-98eb-f5c188c249e7 does not exist
Jan 26 13:30:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:30:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:30:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:30:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:30:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:30:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:30:26 np0005596060 nova_compute[247421]: 2026-01-26 18:30:26.340 247428 DEBUG oslo_concurrency.lockutils [req-c41ff3f2-a161-40e0-bc18-c956dabd62b3 req-0712df3c-8409-49ad-9765-385699904508 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:30:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 134 MiB data, 319 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 191 KiB/s wr, 183 op/s
Jan 26 13:30:26 np0005596060 nova_compute[247421]: 2026-01-26 18:30:26.883 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:26 np0005596060 podman[285132]: 2026-01-26 18:30:26.902519696 +0000 UTC m=+0.027260714 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:30:27 np0005596060 nova_compute[247421]: 2026-01-26 18:30:27.035 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:27.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:27.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:27 np0005596060 podman[285132]: 2026-01-26 18:30:27.230457479 +0000 UTC m=+0.355198467 container create bc083ec79c0f11e0826095359a5e0f90ad88af62a6ba0a71442715dafbf77769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dhawan, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:30:27 np0005596060 systemd[1]: Started libpod-conmon-bc083ec79c0f11e0826095359a5e0f90ad88af62a6ba0a71442715dafbf77769.scope.
Jan 26 13:30:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:30:27 np0005596060 podman[285132]: 2026-01-26 18:30:27.768372252 +0000 UTC m=+0.893113330 container init bc083ec79c0f11e0826095359a5e0f90ad88af62a6ba0a71442715dafbf77769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dhawan, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:30:27 np0005596060 podman[285132]: 2026-01-26 18:30:27.784277307 +0000 UTC m=+0.909018295 container start bc083ec79c0f11e0826095359a5e0f90ad88af62a6ba0a71442715dafbf77769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dhawan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:30:27 np0005596060 pensive_dhawan[285149]: 167 167
Jan 26 13:30:27 np0005596060 systemd[1]: libpod-bc083ec79c0f11e0826095359a5e0f90ad88af62a6ba0a71442715dafbf77769.scope: Deactivated successfully.
Jan 26 13:30:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:30:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:30:27 np0005596060 podman[285132]: 2026-01-26 18:30:27.88580758 +0000 UTC m=+1.010548578 container attach bc083ec79c0f11e0826095359a5e0f90ad88af62a6ba0a71442715dafbf77769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dhawan, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 13:30:27 np0005596060 podman[285132]: 2026-01-26 18:30:27.886736253 +0000 UTC m=+1.011477271 container died bc083ec79c0f11e0826095359a5e0f90ad88af62a6ba0a71442715dafbf77769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dhawan, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 13:30:28 np0005596060 systemd[1]: var-lib-containers-storage-overlay-50c211b0f45bba7e60dd67e0efca7bd5c26182dd99a1b28d2719e1272aad8e42-merged.mount: Deactivated successfully.
Jan 26 13:30:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 142 MiB data, 340 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 851 KiB/s wr, 134 op/s
Jan 26 13:30:28 np0005596060 podman[285132]: 2026-01-26 18:30:28.573414241 +0000 UTC m=+1.698155229 container remove bc083ec79c0f11e0826095359a5e0f90ad88af62a6ba0a71442715dafbf77769 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:30:28 np0005596060 systemd[1]: libpod-conmon-bc083ec79c0f11e0826095359a5e0f90ad88af62a6ba0a71442715dafbf77769.scope: Deactivated successfully.
Jan 26 13:30:28 np0005596060 podman[285175]: 2026-01-26 18:30:28.734552619 +0000 UTC m=+0.044880742 container create c1877775e0346c54651cc5cbbac66303696fd84ee906860d08e2c634ef398f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_leakey, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:30:28 np0005596060 systemd[1]: Started libpod-conmon-c1877775e0346c54651cc5cbbac66303696fd84ee906860d08e2c634ef398f9c.scope.
Jan 26 13:30:28 np0005596060 podman[285175]: 2026-01-26 18:30:28.717072655 +0000 UTC m=+0.027400798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:30:28 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:30:28 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278476ccb8d0a4fd5ae9f17af898acd0eed931867f8d1e8baa47e200f8a551b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:28 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278476ccb8d0a4fd5ae9f17af898acd0eed931867f8d1e8baa47e200f8a551b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:28 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278476ccb8d0a4fd5ae9f17af898acd0eed931867f8d1e8baa47e200f8a551b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:28 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278476ccb8d0a4fd5ae9f17af898acd0eed931867f8d1e8baa47e200f8a551b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:28 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/278476ccb8d0a4fd5ae9f17af898acd0eed931867f8d1e8baa47e200f8a551b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:28 np0005596060 podman[285175]: 2026-01-26 18:30:28.842143216 +0000 UTC m=+0.152471369 container init c1877775e0346c54651cc5cbbac66303696fd84ee906860d08e2c634ef398f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_leakey, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 13:30:28 np0005596060 podman[285175]: 2026-01-26 18:30:28.851300129 +0000 UTC m=+0.161628252 container start c1877775e0346c54651cc5cbbac66303696fd84ee906860d08e2c634ef398f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_leakey, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:30:28 np0005596060 podman[285175]: 2026-01-26 18:30:28.855770753 +0000 UTC m=+0.166098896 container attach c1877775e0346c54651cc5cbbac66303696fd84ee906860d08e2c634ef398f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_leakey, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:30:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:30:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:29.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:30:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:29.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:29 np0005596060 infallible_leakey[285192]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:30:29 np0005596060 infallible_leakey[285192]: --> relative data size: 1.0
Jan 26 13:30:29 np0005596060 infallible_leakey[285192]: --> All data devices are unavailable
Jan 26 13:30:29 np0005596060 systemd[1]: libpod-c1877775e0346c54651cc5cbbac66303696fd84ee906860d08e2c634ef398f9c.scope: Deactivated successfully.
Jan 26 13:30:29 np0005596060 podman[285175]: 2026-01-26 18:30:29.736990971 +0000 UTC m=+1.047319114 container died c1877775e0346c54651cc5cbbac66303696fd84ee906860d08e2c634ef398f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_leakey, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:30:29 np0005596060 systemd[1]: var-lib-containers-storage-overlay-278476ccb8d0a4fd5ae9f17af898acd0eed931867f8d1e8baa47e200f8a551b2-merged.mount: Deactivated successfully.
Jan 26 13:30:30 np0005596060 podman[285175]: 2026-01-26 18:30:30.070250128 +0000 UTC m=+1.380578251 container remove c1877775e0346c54651cc5cbbac66303696fd84ee906860d08e2c634ef398f9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:30:30 np0005596060 systemd[1]: libpod-conmon-c1877775e0346c54651cc5cbbac66303696fd84ee906860d08e2c634ef398f9c.scope: Deactivated successfully.
Jan 26 13:30:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:30:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 142 MiB data, 340 MiB used, 21 GiB / 21 GiB avail; 489 KiB/s rd, 827 KiB/s wr, 58 op/s
Jan 26 13:30:30 np0005596060 podman[285364]: 2026-01-26 18:30:30.733526642 +0000 UTC m=+0.029550642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:30:30 np0005596060 podman[285364]: 2026-01-26 18:30:30.851318879 +0000 UTC m=+0.147342879 container create 0ade91a3d1619dc76e776fb8fc7c10035a0e89f70079eae38c906aee7b49d997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pascal, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:30:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:31.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:31.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:31 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:31Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:cb:52 10.100.0.13
Jan 26 13:30:31 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:31Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:cb:52 10.100.0.13
Jan 26 13:30:31 np0005596060 systemd[1]: Started libpod-conmon-0ade91a3d1619dc76e776fb8fc7c10035a0e89f70079eae38c906aee7b49d997.scope.
Jan 26 13:30:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:30:31 np0005596060 podman[285364]: 2026-01-26 18:30:31.363939169 +0000 UTC m=+0.659963209 container init 0ade91a3d1619dc76e776fb8fc7c10035a0e89f70079eae38c906aee7b49d997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 26 13:30:31 np0005596060 podman[285364]: 2026-01-26 18:30:31.372444626 +0000 UTC m=+0.668468616 container start 0ade91a3d1619dc76e776fb8fc7c10035a0e89f70079eae38c906aee7b49d997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 26 13:30:31 np0005596060 busy_pascal[285380]: 167 167
Jan 26 13:30:31 np0005596060 systemd[1]: libpod-0ade91a3d1619dc76e776fb8fc7c10035a0e89f70079eae38c906aee7b49d997.scope: Deactivated successfully.
Jan 26 13:30:31 np0005596060 conmon[285380]: conmon 0ade91a3d1619dc76e77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ade91a3d1619dc76e776fb8fc7c10035a0e89f70079eae38c906aee7b49d997.scope/container/memory.events
Jan 26 13:30:31 np0005596060 podman[285364]: 2026-01-26 18:30:31.412130705 +0000 UTC m=+0.708155065 container attach 0ade91a3d1619dc76e776fb8fc7c10035a0e89f70079eae38c906aee7b49d997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pascal, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:30:31 np0005596060 podman[285364]: 2026-01-26 18:30:31.412999738 +0000 UTC m=+0.709023738 container died 0ade91a3d1619dc76e776fb8fc7c10035a0e89f70079eae38c906aee7b49d997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pascal, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:30:31 np0005596060 systemd[1]: var-lib-containers-storage-overlay-435e63340df3dfd143495d262a0b2005b6f52a890a9c12111df170d0f4195edd-merged.mount: Deactivated successfully.
Jan 26 13:30:31 np0005596060 podman[285364]: 2026-01-26 18:30:31.652532401 +0000 UTC m=+0.948556381 container remove 0ade91a3d1619dc76e776fb8fc7c10035a0e89f70079eae38c906aee7b49d997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 13:30:31 np0005596060 systemd[1]: libpod-conmon-0ade91a3d1619dc76e776fb8fc7c10035a0e89f70079eae38c906aee7b49d997.scope: Deactivated successfully.
Jan 26 13:30:31 np0005596060 nova_compute[247421]: 2026-01-26 18:30:31.885 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:31 np0005596060 podman[285406]: 2026-01-26 18:30:31.854501849 +0000 UTC m=+0.032283562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:30:32 np0005596060 podman[285406]: 2026-01-26 18:30:32.020158104 +0000 UTC m=+0.197939807 container create 0d9e3fac3015d191f1282907643a2ee4e3fcfe143536bc0c959a2c2129b311a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rhodes, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:30:32 np0005596060 nova_compute[247421]: 2026-01-26 18:30:32.038 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:32 np0005596060 systemd[1]: Started libpod-conmon-0d9e3fac3015d191f1282907643a2ee4e3fcfe143536bc0c959a2c2129b311a1.scope.
Jan 26 13:30:32 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:30:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea80639b7e53782830770b8802093d3be4a532f14f9ea90a5d45009c73f31299/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea80639b7e53782830770b8802093d3be4a532f14f9ea90a5d45009c73f31299/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea80639b7e53782830770b8802093d3be4a532f14f9ea90a5d45009c73f31299/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:32 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea80639b7e53782830770b8802093d3be4a532f14f9ea90a5d45009c73f31299/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:32 np0005596060 podman[285406]: 2026-01-26 18:30:32.178774729 +0000 UTC m=+0.356556502 container init 0d9e3fac3015d191f1282907643a2ee4e3fcfe143536bc0c959a2c2129b311a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 13:30:32 np0005596060 podman[285406]: 2026-01-26 18:30:32.188296821 +0000 UTC m=+0.366078524 container start 0d9e3fac3015d191f1282907643a2ee4e3fcfe143536bc0c959a2c2129b311a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:30:32 np0005596060 podman[285406]: 2026-01-26 18:30:32.193453232 +0000 UTC m=+0.371234935 container attach 0d9e3fac3015d191f1282907643a2ee4e3fcfe143536bc0c959a2c2129b311a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 13:30:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 151 MiB data, 347 MiB used, 21 GiB / 21 GiB avail; 931 KiB/s rd, 1.3 MiB/s wr, 112 op/s
Jan 26 13:30:32 np0005596060 nova_compute[247421]: 2026-01-26 18:30:32.631 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:32 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:32Z|00114|binding|INFO|Releasing lport 4e2db7ab-7b6e-4c95-8db9-10901ea92f65 from this chassis (sb_readonly=0)
Jan 26 13:30:33 np0005596060 nova_compute[247421]: 2026-01-26 18:30:33.048 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]: {
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:    "1": [
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:        {
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            "devices": [
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "/dev/loop3"
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            ],
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            "lv_name": "ceph_lv0",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            "lv_size": "7511998464",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            "name": "ceph_lv0",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            "tags": {
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "ceph.cluster_name": "ceph",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "ceph.crush_device_class": "",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "ceph.encrypted": "0",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "ceph.osd_id": "1",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "ceph.type": "block",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:                "ceph.vdo": "0"
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            },
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            "type": "block",
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:            "vg_name": "ceph_vg0"
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:        }
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]:    ]
Jan 26 13:30:33 np0005596060 xenodochial_rhodes[285422]: }
Jan 26 13:30:33 np0005596060 systemd[1]: libpod-0d9e3fac3015d191f1282907643a2ee4e3fcfe143536bc0c959a2c2129b311a1.scope: Deactivated successfully.
Jan 26 13:30:33 np0005596060 conmon[285422]: conmon 0d9e3fac3015d191f128 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d9e3fac3015d191f1282907643a2ee4e3fcfe143536bc0c959a2c2129b311a1.scope/container/memory.events
Jan 26 13:30:33 np0005596060 podman[285406]: 2026-01-26 18:30:33.110079421 +0000 UTC m=+1.287861084 container died 0d9e3fac3015d191f1282907643a2ee4e3fcfe143536bc0c959a2c2129b311a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 13:30:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:33.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:33.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:33 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ea80639b7e53782830770b8802093d3be4a532f14f9ea90a5d45009c73f31299-merged.mount: Deactivated successfully.
Jan 26 13:30:33 np0005596060 podman[285406]: 2026-01-26 18:30:33.292994614 +0000 UTC m=+1.470776277 container remove 0d9e3fac3015d191f1282907643a2ee4e3fcfe143536bc0c959a2c2129b311a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_rhodes, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:30:33 np0005596060 systemd[1]: libpod-conmon-0d9e3fac3015d191f1282907643a2ee4e3fcfe143536bc0c959a2c2129b311a1.scope: Deactivated successfully.
Jan 26 13:30:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Jan 26 13:30:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Jan 26 13:30:33 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Jan 26 13:30:33 np0005596060 podman[285586]: 2026-01-26 18:30:33.879264509 +0000 UTC m=+0.022961955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:30:33 np0005596060 podman[285586]: 2026-01-26 18:30:33.98385892 +0000 UTC m=+0.127556346 container create bdc09e470a7697b0fe2688710d27c8904f14ae83cb05c4e900b9dc8adeefd65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 26 13:30:34 np0005596060 systemd[1]: Started libpod-conmon-bdc09e470a7697b0fe2688710d27c8904f14ae83cb05c4e900b9dc8adeefd65a.scope.
Jan 26 13:30:34 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:30:34 np0005596060 podman[285586]: 2026-01-26 18:30:34.065934948 +0000 UTC m=+0.209632394 container init bdc09e470a7697b0fe2688710d27c8904f14ae83cb05c4e900b9dc8adeefd65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 13:30:34 np0005596060 podman[285586]: 2026-01-26 18:30:34.074732542 +0000 UTC m=+0.218429978 container start bdc09e470a7697b0fe2688710d27c8904f14ae83cb05c4e900b9dc8adeefd65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 13:30:34 np0005596060 eager_bohr[285602]: 167 167
Jan 26 13:30:34 np0005596060 systemd[1]: libpod-bdc09e470a7697b0fe2688710d27c8904f14ae83cb05c4e900b9dc8adeefd65a.scope: Deactivated successfully.
Jan 26 13:30:34 np0005596060 conmon[285602]: conmon bdc09e470a7697b0fe26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bdc09e470a7697b0fe2688710d27c8904f14ae83cb05c4e900b9dc8adeefd65a.scope/container/memory.events
Jan 26 13:30:34 np0005596060 podman[285586]: 2026-01-26 18:30:34.142154357 +0000 UTC m=+0.285851803 container attach bdc09e470a7697b0fe2688710d27c8904f14ae83cb05c4e900b9dc8adeefd65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bohr, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 13:30:34 np0005596060 podman[285586]: 2026-01-26 18:30:34.14465923 +0000 UTC m=+0.288356676 container died bdc09e470a7697b0fe2688710d27c8904f14ae83cb05c4e900b9dc8adeefd65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bohr, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:30:34 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d61c71a260b8fbcbcb36f30d700a2938d0748607c8567667d599bbde9007abea-merged.mount: Deactivated successfully.
Jan 26 13:30:34 np0005596060 podman[285586]: 2026-01-26 18:30:34.492008048 +0000 UTC m=+0.635705474 container remove bdc09e470a7697b0fe2688710d27c8904f14ae83cb05c4e900b9dc8adeefd65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 26 13:30:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 167 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 594 KiB/s rd, 2.6 MiB/s wr, 109 op/s
Jan 26 13:30:34 np0005596060 systemd[1]: libpod-conmon-bdc09e470a7697b0fe2688710d27c8904f14ae83cb05c4e900b9dc8adeefd65a.scope: Deactivated successfully.
Jan 26 13:30:34 np0005596060 podman[285627]: 2026-01-26 18:30:34.743538197 +0000 UTC m=+0.100066137 container create 86073cefa92acc2d747267ff88cc755156ed8d45d6cd32f952c151605ad99420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_nash, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:30:34 np0005596060 podman[285627]: 2026-01-26 18:30:34.667961524 +0000 UTC m=+0.024489484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:30:34 np0005596060 systemd[1]: Started libpod-conmon-86073cefa92acc2d747267ff88cc755156ed8d45d6cd32f952c151605ad99420.scope.
Jan 26 13:30:34 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:30:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d3e3e940978c2d6df14468996a900515889dc54808038ee5363daa62872959/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d3e3e940978c2d6df14468996a900515889dc54808038ee5363daa62872959/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d3e3e940978c2d6df14468996a900515889dc54808038ee5363daa62872959/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:34 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54d3e3e940978c2d6df14468996a900515889dc54808038ee5363daa62872959/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:30:34 np0005596060 podman[285627]: 2026-01-26 18:30:34.945202626 +0000 UTC m=+0.301730606 container init 86073cefa92acc2d747267ff88cc755156ed8d45d6cd32f952c151605ad99420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_nash, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:30:34 np0005596060 podman[285627]: 2026-01-26 18:30:34.95913541 +0000 UTC m=+0.315663350 container start 86073cefa92acc2d747267ff88cc755156ed8d45d6cd32f952c151605ad99420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:30:35 np0005596060 podman[285627]: 2026-01-26 18:30:35.071203991 +0000 UTC m=+0.427731941 container attach 86073cefa92acc2d747267ff88cc755156ed8d45d6cd32f952c151605ad99420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_nash, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:30:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:35.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:30:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:35.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:30:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:30:35 np0005596060 heuristic_nash[285643]: {
Jan 26 13:30:35 np0005596060 heuristic_nash[285643]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:30:35 np0005596060 heuristic_nash[285643]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:30:35 np0005596060 heuristic_nash[285643]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:30:35 np0005596060 heuristic_nash[285643]:        "osd_id": 1,
Jan 26 13:30:35 np0005596060 heuristic_nash[285643]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:30:35 np0005596060 heuristic_nash[285643]:        "type": "bluestore"
Jan 26 13:30:35 np0005596060 heuristic_nash[285643]:    }
Jan 26 13:30:35 np0005596060 heuristic_nash[285643]: }
Jan 26 13:30:35 np0005596060 systemd[1]: libpod-86073cefa92acc2d747267ff88cc755156ed8d45d6cd32f952c151605ad99420.scope: Deactivated successfully.
Jan 26 13:30:35 np0005596060 podman[285627]: 2026-01-26 18:30:35.843280143 +0000 UTC m=+1.199808093 container died 86073cefa92acc2d747267ff88cc755156ed8d45d6cd32f952c151605ad99420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_nash, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 13:30:36 np0005596060 systemd[1]: var-lib-containers-storage-overlay-54d3e3e940978c2d6df14468996a900515889dc54808038ee5363daa62872959-merged.mount: Deactivated successfully.
Jan 26 13:30:36 np0005596060 podman[285627]: 2026-01-26 18:30:36.244976232 +0000 UTC m=+1.601504172 container remove 86073cefa92acc2d747267ff88cc755156ed8d45d6cd32f952c151605ad99420 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_nash, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 13:30:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:30:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3026811644' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:30:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:30:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3026811644' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:30:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:30:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:30:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:30:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:30:36 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 2b5c6895-ab83-4dec-9f64-0f819260100f does not exist
Jan 26 13:30:36 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 628e9d40-8513-4268-ba5b-0d8ee223a9fd does not exist
Jan 26 13:30:36 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev c51ec84a-378b-4568-8451-17c890782298 does not exist
Jan 26 13:30:36 np0005596060 systemd[1]: libpod-conmon-86073cefa92acc2d747267ff88cc755156ed8d45d6cd32f952c151605ad99420.scope: Deactivated successfully.
Jan 26 13:30:36 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:30:36 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:30:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 151 MiB data, 341 MiB used, 21 GiB / 21 GiB avail; 586 KiB/s rd, 2.4 MiB/s wr, 113 op/s
Jan 26 13:30:36 np0005596060 nova_compute[247421]: 2026-01-26 18:30:36.887 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:37 np0005596060 nova_compute[247421]: 2026-01-26 18:30:37.039 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:37.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:37.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:37 np0005596060 nova_compute[247421]: 2026-01-26 18:30:37.183 247428 INFO nova.compute.manager [None req-c122b81e-37f8-42aa-a4c5-63c8493e3623 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Get console output#033[00m
Jan 26 13:30:37 np0005596060 nova_compute[247421]: 2026-01-26 18:30:37.189 247428 INFO oslo.privsep.daemon [None req-c122b81e-37f8-42aa-a4c5-63c8493e3623 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpqf4vfzvv/privsep.sock']#033[00m
Jan 26 13:30:37 np0005596060 nova_compute[247421]: 2026-01-26 18:30:37.875 247428 INFO oslo.privsep.daemon [None req-c122b81e-37f8-42aa-a4c5-63c8493e3623 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 26 13:30:37 np0005596060 nova_compute[247421]: 2026-01-26 18:30:37.747 285734 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 26 13:30:37 np0005596060 nova_compute[247421]: 2026-01-26 18:30:37.751 285734 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 26 13:30:37 np0005596060 nova_compute[247421]: 2026-01-26 18:30:37.753 285734 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 26 13:30:37 np0005596060 nova_compute[247421]: 2026-01-26 18:30:37.753 285734 INFO oslo.privsep.daemon [-] privsep daemon running as pid 285734#033[00m
Jan 26 13:30:37 np0005596060 nova_compute[247421]: 2026-01-26 18:30:37.975 285734 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 26 13:30:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 121 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 568 KiB/s rd, 1.6 MiB/s wr, 104 op/s
Jan 26 13:30:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:39.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:39.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:30:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3435964828' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:30:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:30:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3435964828' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:30:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:30:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Jan 26 13:30:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Jan 26 13:30:40 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Jan 26 13:30:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 121 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 1.3 MiB/s wr, 50 op/s
Jan 26 13:30:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:30:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:41.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:30:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:41.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:41 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:41Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:cb:52 10.100.0.13
Jan 26 13:30:41 np0005596060 podman[285788]: 2026-01-26 18:30:41.817483764 +0000 UTC m=+0.074410374 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 13:30:41 np0005596060 podman[285789]: 2026-01-26 18:30:41.827816737 +0000 UTC m=+0.084880470 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 13:30:41 np0005596060 nova_compute[247421]: 2026-01-26 18:30:41.889 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:42 np0005596060 nova_compute[247421]: 2026-01-26 18:30:42.041 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 121 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 43 op/s
Jan 26 13:30:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:43.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:43.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:30:44
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['backups', '.mgr', 'vms', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:30:44 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:44Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:cb:52 10.100.0.13
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 121 MiB data, 327 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 22 KiB/s wr, 39 op/s
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:30:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:30:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:45.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:45.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:45 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:45Z|00115|binding|INFO|Releasing lport 4e2db7ab-7b6e-4c95-8db9-10901ea92f65 from this chassis (sb_readonly=0)
Jan 26 13:30:45 np0005596060 nova_compute[247421]: 2026-01-26 18:30:45.307 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:30:45 np0005596060 nova_compute[247421]: 2026-01-26 18:30:45.733 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 141 MiB data, 335 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 816 KiB/s wr, 37 op/s
Jan 26 13:30:46 np0005596060 nova_compute[247421]: 2026-01-26 18:30:46.892 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:47 np0005596060 nova_compute[247421]: 2026-01-26 18:30:47.042 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:47.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:30:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:47.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:30:47 np0005596060 nova_compute[247421]: 2026-01-26 18:30:47.513 247428 DEBUG nova.compute.manager [req-10144ca3-770c-4d02-8d0e-d25838a7b66b req-7f90169a-eaad-4fa1-a0a1-59ce3b3ef494 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Received event network-changed-fba64aff-9582-4f05-93c1-c6ef87b0b237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:30:47 np0005596060 nova_compute[247421]: 2026-01-26 18:30:47.514 247428 DEBUG nova.compute.manager [req-10144ca3-770c-4d02-8d0e-d25838a7b66b req-7f90169a-eaad-4fa1-a0a1-59ce3b3ef494 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Refreshing instance network info cache due to event network-changed-fba64aff-9582-4f05-93c1-c6ef87b0b237. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:30:47 np0005596060 nova_compute[247421]: 2026-01-26 18:30:47.514 247428 DEBUG oslo_concurrency.lockutils [req-10144ca3-770c-4d02-8d0e-d25838a7b66b req-7f90169a-eaad-4fa1-a0a1-59ce3b3ef494 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:30:47 np0005596060 nova_compute[247421]: 2026-01-26 18:30:47.514 247428 DEBUG oslo_concurrency.lockutils [req-10144ca3-770c-4d02-8d0e-d25838a7b66b req-7f90169a-eaad-4fa1-a0a1-59ce3b3ef494 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:30:47 np0005596060 nova_compute[247421]: 2026-01-26 18:30:47.514 247428 DEBUG nova.network.neutron [req-10144ca3-770c-4d02-8d0e-d25838a7b66b req-7f90169a-eaad-4fa1-a0a1-59ce3b3ef494 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Refreshing network info cache for port fba64aff-9582-4f05-93c1-c6ef87b0b237 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.159 247428 DEBUG oslo_concurrency.lockutils [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Acquiring lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.160 247428 DEBUG oslo_concurrency.lockutils [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.160 247428 DEBUG oslo_concurrency.lockutils [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Acquiring lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.160 247428 DEBUG oslo_concurrency.lockutils [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.161 247428 DEBUG oslo_concurrency.lockutils [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.162 247428 INFO nova.compute.manager [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Terminating instance#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.163 247428 DEBUG nova.compute.manager [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:30:48 np0005596060 kernel: tapfba64aff-95 (unregistering): left promiscuous mode
Jan 26 13:30:48 np0005596060 NetworkManager[48900]: <info>  [1769452248.2487] device (tapfba64aff-95): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.258 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:48 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:48Z|00116|binding|INFO|Releasing lport fba64aff-9582-4f05-93c1-c6ef87b0b237 from this chassis (sb_readonly=0)
Jan 26 13:30:48 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:48Z|00117|binding|INFO|Setting lport fba64aff-9582-4f05-93c1-c6ef87b0b237 down in Southbound
Jan 26 13:30:48 np0005596060 ovn_controller[148842]: 2026-01-26T18:30:48Z|00118|binding|INFO|Removing iface tapfba64aff-95 ovn-installed in OVS
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.262 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.282 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:48 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:48.325 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:30:48 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:48.326 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.325 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:48 np0005596060 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000014.scope: Deactivated successfully.
Jan 26 13:30:48 np0005596060 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000014.scope: Consumed 14.670s CPU time.
Jan 26 13:30:48 np0005596060 systemd-machined[213879]: Machine qemu-9-instance-00000014 terminated.
Jan 26 13:30:48 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:48.377 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:cb:52 10.100.0.13'], port_security=['fa:16:3e:f4:cb:52 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d45a4b4-37e9-4b54-9a0a-2197d41d528a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4a8e8029f3ed448ea8965530e4aef753', 'neutron:revision_number': '4', 'neutron:security_group_ids': '433fd554-99ec-4e91-930a-6083c4ce4aa3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8982920-67c2-42f8-b3a8-c7528e2fb577, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=fba64aff-9582-4f05-93c1-c6ef87b0b237) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:30:48 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:48.378 159331 INFO neutron.agent.ovn.metadata.agent [-] Port fba64aff-9582-4f05-93c1-c6ef87b0b237 in datapath 6d45a4b4-37e9-4b54-9a0a-2197d41d528a unbound from our chassis#033[00m
Jan 26 13:30:48 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:48.379 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d45a4b4-37e9-4b54-9a0a-2197d41d528a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:30:48 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:48.380 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[289b5d53-a127-46fd-a7c3-f5fdef705517]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:48 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:48.381 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a namespace which is not needed anymore#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.392 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.397 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.407 247428 INFO nova.virt.libvirt.driver [-] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Instance destroyed successfully.#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.408 247428 DEBUG nova.objects.instance [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lazy-loading 'resources' on Instance uuid 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.501 247428 DEBUG nova.virt.libvirt.vif [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:29:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1896144893',display_name='tempest-TestNetworkBasicOps-server-1896144893',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1896144893',id=20,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJD1GAAKPU4UHpAJNj9tg0PYZw2e+wWrjCNIOvavEeEaKY0ulKMjOCWkWjIlQ1hgErBA0KMJ7bweoI5ePPYuhwFVnIGmPDVd3/HS3123LXdTTSMOjBunUJcNc9vJtStXeA==',key_name='tempest-TestNetworkBasicOps-1236305140',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:30:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4a8e8029f3ed448ea8965530e4aef753',ramdisk_id='',reservation_id='r-1lo26kcz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1131412788',owner_user_name='tempest-TestNetworkBasicOps-1131412788-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:30:14Z,user_data=None,user_id='3bff6d7161b14b1d98f063d24c52c0ca',uuid=536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.502 247428 DEBUG nova.network.os_vif_util [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Converting VIF {"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.503 247428 DEBUG nova.network.os_vif_util [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:cb:52,bridge_name='br-int',has_traffic_filtering=True,id=fba64aff-9582-4f05-93c1-c6ef87b0b237,network=Network(6d45a4b4-37e9-4b54-9a0a-2197d41d528a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba64aff-95') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.504 247428 DEBUG os_vif [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:cb:52,bridge_name='br-int',has_traffic_filtering=True,id=fba64aff-9582-4f05-93c1-c6ef87b0b237,network=Network(6d45a4b4-37e9-4b54-9a0a-2197d41d528a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba64aff-95') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.506 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.506 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba64aff-95, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.508 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.510 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:48 np0005596060 nova_compute[247421]: 2026-01-26 18:30:48.512 247428 INFO os_vif [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:cb:52,bridge_name='br-int',has_traffic_filtering=True,id=fba64aff-9582-4f05-93c1-c6ef87b0b237,network=Network(6d45a4b4-37e9-4b54-9a0a-2197d41d528a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfba64aff-95')#033[00m
Jan 26 13:30:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 167 MiB data, 366 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 26 13:30:48 np0005596060 neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a[284748]: [NOTICE]   (284752) : haproxy version is 2.8.14-c23fe91
Jan 26 13:30:48 np0005596060 neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a[284748]: [NOTICE]   (284752) : path to executable is /usr/sbin/haproxy
Jan 26 13:30:48 np0005596060 neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a[284748]: [WARNING]  (284752) : Exiting Master process...
Jan 26 13:30:48 np0005596060 neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a[284748]: [ALERT]    (284752) : Current worker (284754) exited with code 143 (Terminated)
Jan 26 13:30:48 np0005596060 neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a[284748]: [WARNING]  (284752) : All workers exited. Exiting... (0)
Jan 26 13:30:48 np0005596060 systemd[1]: libpod-f6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd.scope: Deactivated successfully.
Jan 26 13:30:48 np0005596060 podman[285868]: 2026-01-26 18:30:48.703631134 +0000 UTC m=+0.213803050 container died f6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 13:30:48 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f12dc2cc1c9a11a3368bcf0b5663b648a266e2603a6b8e1843da7d4830048182-merged.mount: Deactivated successfully.
Jan 26 13:30:48 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd-userdata-shm.mount: Deactivated successfully.
Jan 26 13:30:48 np0005596060 podman[285868]: 2026-01-26 18:30:48.959773601 +0000 UTC m=+0.469945507 container cleanup f6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 13:30:48 np0005596060 systemd[1]: libpod-conmon-f6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd.scope: Deactivated successfully.
Jan 26 13:30:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:49.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:49.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:49 np0005596060 podman[285914]: 2026-01-26 18:30:49.240590914 +0000 UTC m=+0.258568028 container remove f6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:30:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:49.246 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[2f28bca6-e1f7-4443-a33b-a9d8b2faed36]: (4, ('Mon Jan 26 06:30:48 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a (f6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd)\nf6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd\nMon Jan 26 06:30:48 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a (f6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd)\nf6aab4ca281a4a6b78d0df9cc4ff6c19def9171774fb201b50743bc946055ddd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:49.247 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[c244d716-f0a3-443e-9d31-38e3bb738ca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:49.248 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d45a4b4-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:30:49 np0005596060 nova_compute[247421]: 2026-01-26 18:30:49.250 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:49 np0005596060 kernel: tap6d45a4b4-30: left promiscuous mode
Jan 26 13:30:49 np0005596060 nova_compute[247421]: 2026-01-26 18:30:49.264 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:49 np0005596060 nova_compute[247421]: 2026-01-26 18:30:49.264 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:49.267 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[c242cd05-a05f-4d95-a825-14c179d1cc2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:49.280 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[2a3abef7-eb0b-423e-9cad-8654f8f723c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:49.281 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7d2ef0-df79-4020-80fa-18d55f516f84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:49.296 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f072817c-f8be-4c2e-b996-12b38efeb151]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597832, 'reachable_time': 30649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285929, 'error': None, 'target': 'ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:49 np0005596060 systemd[1]: run-netns-ovnmeta\x2d6d45a4b4\x2d37e9\x2d4b54\x2d9a0a\x2d2197d41d528a.mount: Deactivated successfully.
Jan 26 13:30:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:49.300 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d45a4b4-37e9-4b54-9a0a-2197d41d528a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:30:49 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:49.300 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[89dccff6-29fd-4a09-8388-8234501adb8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:30:50 np0005596060 nova_compute[247421]: 2026-01-26 18:30:50.011 247428 DEBUG nova.compute.manager [req-1a95b5c2-8c1e-416d-bba1-70fe9ebc1f9f req-eb42f0f5-baa0-4f6f-b011-01ebdb45f2cd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Received event network-vif-unplugged-fba64aff-9582-4f05-93c1-c6ef87b0b237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:30:50 np0005596060 nova_compute[247421]: 2026-01-26 18:30:50.011 247428 DEBUG oslo_concurrency.lockutils [req-1a95b5c2-8c1e-416d-bba1-70fe9ebc1f9f req-eb42f0f5-baa0-4f6f-b011-01ebdb45f2cd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:30:50 np0005596060 nova_compute[247421]: 2026-01-26 18:30:50.011 247428 DEBUG oslo_concurrency.lockutils [req-1a95b5c2-8c1e-416d-bba1-70fe9ebc1f9f req-eb42f0f5-baa0-4f6f-b011-01ebdb45f2cd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:30:50 np0005596060 nova_compute[247421]: 2026-01-26 18:30:50.012 247428 DEBUG oslo_concurrency.lockutils [req-1a95b5c2-8c1e-416d-bba1-70fe9ebc1f9f req-eb42f0f5-baa0-4f6f-b011-01ebdb45f2cd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:30:50 np0005596060 nova_compute[247421]: 2026-01-26 18:30:50.012 247428 DEBUG nova.compute.manager [req-1a95b5c2-8c1e-416d-bba1-70fe9ebc1f9f req-eb42f0f5-baa0-4f6f-b011-01ebdb45f2cd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] No waiting events found dispatching network-vif-unplugged-fba64aff-9582-4f05-93c1-c6ef87b0b237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:30:50 np0005596060 nova_compute[247421]: 2026-01-26 18:30:50.012 247428 DEBUG nova.compute.manager [req-1a95b5c2-8c1e-416d-bba1-70fe9ebc1f9f req-eb42f0f5-baa0-4f6f-b011-01ebdb45f2cd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Received event network-vif-unplugged-fba64aff-9582-4f05-93c1-c6ef87b0b237 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:30:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:30:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 167 MiB data, 366 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 26 13:30:50 np0005596060 nova_compute[247421]: 2026-01-26 18:30:50.841 247428 INFO nova.virt.libvirt.driver [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Deleting instance files /var/lib/nova/instances/536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_del#033[00m
Jan 26 13:30:50 np0005596060 nova_compute[247421]: 2026-01-26 18:30:50.842 247428 INFO nova.virt.libvirt.driver [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Deletion of /var/lib/nova/instances/536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2_del complete#033[00m
Jan 26 13:30:51 np0005596060 nova_compute[247421]: 2026-01-26 18:30:51.134 247428 INFO nova.compute.manager [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Took 2.97 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:30:51 np0005596060 nova_compute[247421]: 2026-01-26 18:30:51.134 247428 DEBUG oslo.service.loopingcall [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:30:51 np0005596060 nova_compute[247421]: 2026-01-26 18:30:51.135 247428 DEBUG nova.compute.manager [-] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:30:51 np0005596060 nova_compute[247421]: 2026-01-26 18:30:51.135 247428 DEBUG nova.network.neutron [-] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:30:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:51.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:51.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:51 np0005596060 nova_compute[247421]: 2026-01-26 18:30:51.893 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 116 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Jan 26 13:30:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:53.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:53.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:53 np0005596060 nova_compute[247421]: 2026-01-26 18:30:53.509 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:54 np0005596060 nova_compute[247421]: 2026-01-26 18:30:54.491 247428 DEBUG nova.compute.manager [req-0c7f6cad-7f48-4431-b05a-38d63b96b083 req-78e9b4be-0846-4ee0-970c-35ad4a667a10 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Received event network-vif-plugged-fba64aff-9582-4f05-93c1-c6ef87b0b237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:30:54 np0005596060 nova_compute[247421]: 2026-01-26 18:30:54.492 247428 DEBUG oslo_concurrency.lockutils [req-0c7f6cad-7f48-4431-b05a-38d63b96b083 req-78e9b4be-0846-4ee0-970c-35ad4a667a10 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:30:54 np0005596060 nova_compute[247421]: 2026-01-26 18:30:54.492 247428 DEBUG oslo_concurrency.lockutils [req-0c7f6cad-7f48-4431-b05a-38d63b96b083 req-78e9b4be-0846-4ee0-970c-35ad4a667a10 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:30:54 np0005596060 nova_compute[247421]: 2026-01-26 18:30:54.492 247428 DEBUG oslo_concurrency.lockutils [req-0c7f6cad-7f48-4431-b05a-38d63b96b083 req-78e9b4be-0846-4ee0-970c-35ad4a667a10 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:30:54 np0005596060 nova_compute[247421]: 2026-01-26 18:30:54.492 247428 DEBUG nova.compute.manager [req-0c7f6cad-7f48-4431-b05a-38d63b96b083 req-78e9b4be-0846-4ee0-970c-35ad4a667a10 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] No waiting events found dispatching network-vif-plugged-fba64aff-9582-4f05-93c1-c6ef87b0b237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:30:54 np0005596060 nova_compute[247421]: 2026-01-26 18:30:54.492 247428 WARNING nova.compute.manager [req-0c7f6cad-7f48-4431-b05a-38d63b96b083 req-78e9b4be-0846-4ee0-970c-35ad4a667a10 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Received unexpected event network-vif-plugged-fba64aff-9582-4f05-93c1-c6ef87b0b237 for instance with vm_state active and task_state deleting.#033[00m
Jan 26 13:30:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 419 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Jan 26 13:30:54 np0005596060 nova_compute[247421]: 2026-01-26 18:30:54.769 247428 DEBUG nova.network.neutron [req-10144ca3-770c-4d02-8d0e-d25838a7b66b req-7f90169a-eaad-4fa1-a0a1-59ce3b3ef494 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Updated VIF entry in instance network info cache for port fba64aff-9582-4f05-93c1-c6ef87b0b237. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:30:54 np0005596060 nova_compute[247421]: 2026-01-26 18:30:54.769 247428 DEBUG nova.network.neutron [req-10144ca3-770c-4d02-8d0e-d25838a7b66b req-7f90169a-eaad-4fa1-a0a1-59ce3b3ef494 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Updating instance_info_cache with network_info: [{"id": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "address": "fa:16:3e:f4:cb:52", "network": {"id": "6d45a4b4-37e9-4b54-9a0a-2197d41d528a", "bridge": "br-int", "label": "tempest-network-smoke--1293491331", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4a8e8029f3ed448ea8965530e4aef753", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfba64aff-95", "ovs_interfaceid": "fba64aff-9582-4f05-93c1-c6ef87b0b237", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:30:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:30:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:55.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:30:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:55.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:55 np0005596060 nova_compute[247421]: 2026-01-26 18:30:55.422 247428 DEBUG oslo_concurrency.lockutils [req-10144ca3-770c-4d02-8d0e-d25838a7b66b req-7f90169a-eaad-4fa1-a0a1-59ce3b3ef494 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:30:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:30:56 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:30:56.328 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:30:56 np0005596060 nova_compute[247421]: 2026-01-26 18:30:56.452 247428 DEBUG nova.network.neutron [-] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:30:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 26 13:30:56 np0005596060 nova_compute[247421]: 2026-01-26 18:30:56.599 247428 DEBUG nova.compute.manager [req-0788425d-6fe9-44ce-bee1-f12276235409 req-41bb7843-6762-440f-986a-587fac8b3e31 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Received event network-vif-deleted-fba64aff-9582-4f05-93c1-c6ef87b0b237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:30:56 np0005596060 nova_compute[247421]: 2026-01-26 18:30:56.599 247428 INFO nova.compute.manager [req-0788425d-6fe9-44ce-bee1-f12276235409 req-41bb7843-6762-440f-986a-587fac8b3e31 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Neutron deleted interface fba64aff-9582-4f05-93c1-c6ef87b0b237; detaching it from the instance and deleting it from the info cache#033[00m
Jan 26 13:30:56 np0005596060 nova_compute[247421]: 2026-01-26 18:30:56.600 247428 DEBUG nova.network.neutron [req-0788425d-6fe9-44ce-bee1-f12276235409 req-41bb7843-6762-440f-986a-587fac8b3e31 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:30:56 np0005596060 nova_compute[247421]: 2026-01-26 18:30:56.601 247428 INFO nova.compute.manager [-] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Took 5.47 seconds to deallocate network for instance.#033[00m
Jan 26 13:30:56 np0005596060 nova_compute[247421]: 2026-01-26 18:30:56.683 247428 DEBUG nova.compute.manager [req-0788425d-6fe9-44ce-bee1-f12276235409 req-41bb7843-6762-440f-986a-587fac8b3e31 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Detach interface failed, port_id=fba64aff-9582-4f05-93c1-c6ef87b0b237, reason: Instance 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 26 13:30:56 np0005596060 nova_compute[247421]: 2026-01-26 18:30:56.894 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:57 np0005596060 nova_compute[247421]: 2026-01-26 18:30:57.144 247428 DEBUG oslo_concurrency.lockutils [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:30:57 np0005596060 nova_compute[247421]: 2026-01-26 18:30:57.145 247428 DEBUG oslo_concurrency.lockutils [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:30:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:57.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:57.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:57 np0005596060 nova_compute[247421]: 2026-01-26 18:30:57.242 247428 DEBUG oslo_concurrency.processutils [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:30:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:30:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3244049659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:30:57 np0005596060 nova_compute[247421]: 2026-01-26 18:30:57.684 247428 DEBUG oslo_concurrency.processutils [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:30:57 np0005596060 nova_compute[247421]: 2026-01-26 18:30:57.691 247428 DEBUG nova.compute.provider_tree [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:30:57 np0005596060 nova_compute[247421]: 2026-01-26 18:30:57.753 247428 DEBUG nova.scheduler.client.report [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:30:58 np0005596060 nova_compute[247421]: 2026-01-26 18:30:58.078 247428 DEBUG oslo_concurrency.lockutils [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:30:58 np0005596060 nova_compute[247421]: 2026-01-26 18:30:58.511 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:30:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.1 MiB/s wr, 43 op/s
Jan 26 13:30:58 np0005596060 nova_compute[247421]: 2026-01-26 18:30:58.620 247428 INFO nova.scheduler.client.report [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Deleted allocations for instance 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2#033[00m
Jan 26 13:30:59 np0005596060 nova_compute[247421]: 2026-01-26 18:30:59.099 247428 DEBUG oslo_concurrency.lockutils [None req-f8ff66c8-8941-4a63-b471-519f17638a13 3bff6d7161b14b1d98f063d24c52c0ca 4a8e8029f3ed448ea8965530e4aef753 - - default default] Lock "536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:30:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:30:59.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:30:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:30:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:30:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:30:59.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 13:31:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:31:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:01.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:01.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:01 np0005596060 nova_compute[247421]: 2026-01-26 18:31:01.961 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 13:31:02 np0005596060 nova_compute[247421]: 2026-01-26 18:31:02.935 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:31:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:03.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:31:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:03.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:03 np0005596060 nova_compute[247421]: 2026-01-26 18:31:03.405 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769452248.4042935, 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:31:03 np0005596060 nova_compute[247421]: 2026-01-26 18:31:03.405 247428 INFO nova.compute.manager [-] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:31:03 np0005596060 nova_compute[247421]: 2026-01-26 18:31:03.466 247428 DEBUG nova.compute.manager [None req-c9d281ae-204c-4042-a0b5-f1a885111cbb - - - - - -] [instance: 536fd38c-5fea-44c0-bd4f-9f1ae90f5bf2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:31:03 np0005596060 nova_compute[247421]: 2026-01-26 18:31:03.513 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:31:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:31:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail; 938 B/s rd, 0 B/s wr, 2 op/s
Jan 26 13:31:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:05.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:05.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:31:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:31:06 np0005596060 nova_compute[247421]: 2026-01-26 18:31:06.965 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:07.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:07.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:08 np0005596060 nova_compute[247421]: 2026-01-26 18:31:08.515 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:31:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:09.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:09.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:09 np0005596060 nova_compute[247421]: 2026-01-26 18:31:09.393 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:09 np0005596060 nova_compute[247421]: 2026-01-26 18:31:09.683 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:31:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:31:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:11.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:11.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:31:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 16K writes, 60K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s#012Cumulative WAL: 16K writes, 5236 syncs, 3.24 writes per sync, written: 0.04 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3738 writes, 9852 keys, 3738 commit groups, 1.0 writes per commit group, ingest: 6.18 MB, 0.01 MB/s#012Interval WAL: 3738 writes, 1601 syncs, 2.33 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 13:31:11 np0005596060 nova_compute[247421]: 2026-01-26 18:31:11.969 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:31:12 np0005596060 podman[286016]: 2026-01-26 18:31:12.786068058 +0000 UTC m=+0.051707136 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 26 13:31:12 np0005596060 podman[286017]: 2026-01-26 18:31:12.847274145 +0000 UTC m=+0.111595880 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 13:31:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:13.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:31:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:13.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:31:13 np0005596060 nova_compute[247421]: 2026-01-26 18:31:13.517 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:31:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:31:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:31:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:31:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:31:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:31:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:31:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:14.759 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:31:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:14.759 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:31:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:14.759 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:31:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:31:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2120819056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:31:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:15.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:15.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:31:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:31:16 np0005596060 nova_compute[247421]: 2026-01-26 18:31:16.762 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:31:16 np0005596060 nova_compute[247421]: 2026-01-26 18:31:16.763 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:31:16 np0005596060 nova_compute[247421]: 2026-01-26 18:31:16.763 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:31:16 np0005596060 nova_compute[247421]: 2026-01-26 18:31:16.763 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:31:16 np0005596060 nova_compute[247421]: 2026-01-26 18:31:16.763 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 13:31:16 np0005596060 nova_compute[247421]: 2026-01-26 18:31:16.972 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:31:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:31:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:17.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:31:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:17.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:18 np0005596060 nova_compute[247421]: 2026-01-26 18:31:18.520 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:31:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:31:18 np0005596060 nova_compute[247421]: 2026-01-26 18:31:18.647 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:31:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:19.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:19 np0005596060 nova_compute[247421]: 2026-01-26 18:31:19.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:31:19 np0005596060 nova_compute[247421]: 2026-01-26 18:31:19.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 13:31:19 np0005596060 nova_compute[247421]: 2026-01-26 18:31:19.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 13:31:19 np0005596060 nova_compute[247421]: 2026-01-26 18:31:19.678 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 13:31:19 np0005596060 nova_compute[247421]: 2026-01-26 18:31:19.678 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:31:19 np0005596060 nova_compute[247421]: 2026-01-26 18:31:19.679 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:31:19 np0005596060 nova_compute[247421]: 2026-01-26 18:31:19.850 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:31:19 np0005596060 nova_compute[247421]: 2026-01-26 18:31:19.850 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:31:19 np0005596060 nova_compute[247421]: 2026-01-26 18:31:19.851 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:31:19 np0005596060 nova_compute[247421]: 2026-01-26 18:31:19.851 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 13:31:19 np0005596060 nova_compute[247421]: 2026-01-26 18:31:19.851 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.175 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.176 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.251 247428 DEBUG nova.compute.manager [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 13:31:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:31:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546716650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.294 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.361 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.361 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.369 247428 DEBUG nova.virt.hardware [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.370 247428 INFO nova.compute.claims [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Claim successful on node compute-0.ctlplane.example.com
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.467 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.468 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4733MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.468 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.571 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:31:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:31:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:31:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:31:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2332563377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.989 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:31:20 np0005596060 nova_compute[247421]: 2026-01-26 18:31:20.995 247428 DEBUG nova.compute.provider_tree [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:31:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:21.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:21.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.353 247428 DEBUG nova.scheduler.client.report [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.392 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.392 247428 DEBUG nova.compute.manager [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.394 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.456 247428 DEBUG nova.compute.manager [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.457 247428 DEBUG nova.network.neutron [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.492 247428 INFO nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.541 247428 DEBUG nova.compute.manager [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.593 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance ebdb1528-b5f5-4593-8801-7a25fc358497 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.594 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.594 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.649 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.783 247428 DEBUG nova.compute.manager [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.785 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.785 247428 INFO nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Creating image(s)
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.826 247428 DEBUG nova.storage.rbd_utils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.856 247428 DEBUG nova.storage.rbd_utils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.884 247428 DEBUG nova.storage.rbd_utils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.888 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.949 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.950 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "0e27310cde9db7031eb6052434134c1283ddf216" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.951 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.952 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.984 247428 DEBUG nova.storage.rbd_utils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:31:21 np0005596060 nova_compute[247421]: 2026-01-26 18:31:21.988 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 ebdb1528-b5f5-4593-8801-7a25fc358497_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:31:22 np0005596060 nova_compute[247421]: 2026-01-26 18:31:22.015 247428 DEBUG nova.policy [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6dd15a25d55a4c818b4f121ca4c79ac7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd2387917610d4d928d60d38ade9e3305', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 13:31:22 np0005596060 nova_compute[247421]: 2026-01-26 18:31:22.018 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:31:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:31:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4060533118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:31:22 np0005596060 nova_compute[247421]: 2026-01-26 18:31:22.140 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:31:22 np0005596060 nova_compute[247421]: 2026-01-26 18:31:22.146 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:31:22 np0005596060 nova_compute[247421]: 2026-01-26 18:31:22.380 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:31:22 np0005596060 nova_compute[247421]: 2026-01-26 18:31:22.480 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 13:31:22 np0005596060 nova_compute[247421]: 2026-01-26 18:31:22.481 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:31:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 88 MiB data, 320 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:31:23 np0005596060 nova_compute[247421]: 2026-01-26 18:31:23.061 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 ebdb1528-b5f5-4593-8801-7a25fc358497_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:31:23 np0005596060 nova_compute[247421]: 2026-01-26 18:31:23.135 247428 DEBUG nova.storage.rbd_utils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] resizing rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 26 13:31:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:23.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:23.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:23 np0005596060 nova_compute[247421]: 2026-01-26 18:31:23.387 247428 DEBUG nova.objects.instance [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'migration_context' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:31:23 np0005596060 nova_compute[247421]: 2026-01-26 18:31:23.413 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 26 13:31:23 np0005596060 nova_compute[247421]: 2026-01-26 18:31:23.414 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Ensure instance console log exists: /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 26 13:31:23 np0005596060 nova_compute[247421]: 2026-01-26 18:31:23.414 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:31:23 np0005596060 nova_compute[247421]: 2026-01-26 18:31:23.414 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:31:23 np0005596060 nova_compute[247421]: 2026-01-26 18:31:23.414 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:31:23 np0005596060 nova_compute[247421]: 2026-01-26 18:31:23.524 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:23 np0005596060 ceph-mgr[74563]: [devicehealth INFO root] Check health
Jan 26 13:31:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 112 MiB data, 329 MiB used, 21 GiB / 21 GiB avail; 8.0 KiB/s rd, 776 KiB/s wr, 14 op/s
Jan 26 13:31:24 np0005596060 nova_compute[247421]: 2026-01-26 18:31:24.819 247428 DEBUG nova.network.neutron [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Successfully created port: ca62000c-903a-41ab-abeb-c6427e62fa46 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 26 13:31:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:25.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:25.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:25 np0005596060 nova_compute[247421]: 2026-01-26 18:31:25.453 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:31:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:31:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 120 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 622 KiB/s rd, 1.0 MiB/s wr, 38 op/s
Jan 26 13:31:26 np0005596060 nova_compute[247421]: 2026-01-26 18:31:26.975 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:27.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:31:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:27.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:31:28 np0005596060 nova_compute[247421]: 2026-01-26 18:31:28.374 247428 DEBUG nova.network.neutron [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Successfully updated port: ca62000c-903a-41ab-abeb-c6427e62fa46 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 26 13:31:28 np0005596060 nova_compute[247421]: 2026-01-26 18:31:28.556 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Jan 26 13:31:28 np0005596060 nova_compute[247421]: 2026-01-26 18:31:28.768 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:31:28 np0005596060 nova_compute[247421]: 2026-01-26 18:31:28.768 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquired lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:31:28 np0005596060 nova_compute[247421]: 2026-01-26 18:31:28.769 247428 DEBUG nova.network.neutron [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:31:29 np0005596060 nova_compute[247421]: 2026-01-26 18:31:29.132 247428 DEBUG nova.network.neutron [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 26 13:31:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:29.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:31:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:29.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:31:29 np0005596060 nova_compute[247421]: 2026-01-26 18:31:29.944 247428 DEBUG nova.compute.manager [req-957194bf-c06d-4e9b-bb18-64c9edc31e24 req-30cad209-d235-4646-9705-d401f7c9b7c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-changed-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:31:29 np0005596060 nova_compute[247421]: 2026-01-26 18:31:29.944 247428 DEBUG nova.compute.manager [req-957194bf-c06d-4e9b-bb18-64c9edc31e24 req-30cad209-d235-4646-9705-d401f7c9b7c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Refreshing instance network info cache due to event network-changed-ca62000c-903a-41ab-abeb-c6427e62fa46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:31:29 np0005596060 nova_compute[247421]: 2026-01-26 18:31:29.944 247428 DEBUG oslo_concurrency.lockutils [req-957194bf-c06d-4e9b-bb18-64c9edc31e24 req-30cad209-d235-4646-9705-d401f7c9b7c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:31:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.646 247428 DEBUG nova.network.neutron [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updating instance_info_cache with network_info: [{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.677 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Releasing lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.678 247428 DEBUG nova.compute.manager [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Instance network_info: |[{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.679 247428 DEBUG oslo_concurrency.lockutils [req-957194bf-c06d-4e9b-bb18-64c9edc31e24 req-30cad209-d235-4646-9705-d401f7c9b7c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.679 247428 DEBUG nova.network.neutron [req-957194bf-c06d-4e9b-bb18-64c9edc31e24 req-30cad209-d235-4646-9705-d401f7c9b7c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Refreshing network info cache for port ca62000c-903a-41ab-abeb-c6427e62fa46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.681 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Start _get_guest_xml network_info=[{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.685 247428 WARNING nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.692 247428 DEBUG nova.virt.libvirt.host [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.694 247428 DEBUG nova.virt.libvirt.host [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.703 247428 DEBUG nova.virt.libvirt.host [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.703 247428 DEBUG nova.virt.libvirt.host [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.705 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.705 247428 DEBUG nova.virt.hardware [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.705 247428 DEBUG nova.virt.hardware [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.705 247428 DEBUG nova.virt.hardware [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.706 247428 DEBUG nova.virt.hardware [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.706 247428 DEBUG nova.virt.hardware [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.706 247428 DEBUG nova.virt.hardware [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.706 247428 DEBUG nova.virt.hardware [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.706 247428 DEBUG nova.virt.hardware [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.707 247428 DEBUG nova.virt.hardware [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.707 247428 DEBUG nova.virt.hardware [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.707 247428 DEBUG nova.virt.hardware [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 26 13:31:30 np0005596060 nova_compute[247421]: 2026-01-26 18:31:30.710 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:30.919343) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452290919377, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1333, "num_deletes": 251, "total_data_size": 2197125, "memory_usage": 2248256, "flush_reason": "Manual Compaction"}
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452290928795, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1299951, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36696, "largest_seqno": 38028, "table_properties": {"data_size": 1295147, "index_size": 2136, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13063, "raw_average_key_size": 21, "raw_value_size": 1284514, "raw_average_value_size": 2075, "num_data_blocks": 95, "num_entries": 619, "num_filter_entries": 619, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769452165, "oldest_key_time": 1769452165, "file_creation_time": 1769452290, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 9501 microseconds, and 4159 cpu microseconds.
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:30.928839) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1299951 bytes OK
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:30.928861) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:30.930259) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:30.930275) EVENT_LOG_v1 {"time_micros": 1769452290930270, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:30.930294) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2191311, prev total WAL file size 2191311, number of live WAL files 2.
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:30.931645) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1269KB)], [80(10MB)]
Jan 26 13:31:30 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452290931716, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 12480070, "oldest_snapshot_seqno": -1}
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6253 keys, 9503221 bytes, temperature: kUnknown
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452291006378, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 9503221, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9462612, "index_size": 23859, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 160793, "raw_average_key_size": 25, "raw_value_size": 9351245, "raw_average_value_size": 1495, "num_data_blocks": 958, "num_entries": 6253, "num_filter_entries": 6253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769452290, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:31.006665) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 9503221 bytes
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:31.007819) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.9 rd, 127.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 10.7 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(16.9) write-amplify(7.3) OK, records in: 6718, records dropped: 465 output_compression: NoCompression
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:31.007839) EVENT_LOG_v1 {"time_micros": 1769452291007828, "job": 46, "event": "compaction_finished", "compaction_time_micros": 74765, "compaction_time_cpu_micros": 27976, "output_level": 6, "num_output_files": 1, "total_output_size": 9503221, "num_input_records": 6718, "num_output_records": 6253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452291008205, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452291010727, "job": 46, "event": "table_file_deletion", "file_number": 80}
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:30.931541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:31.010873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:31.010880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:31.010882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:31.010884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:31:31.010886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/279222861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.184 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:31:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:31:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:31.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.209 247428 DEBUG nova.storage.rbd_utils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.213 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:31:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:31.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:31:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3516195438' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.697 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.699 247428 DEBUG nova.virt.libvirt.vif [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:31:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1465409842',display_name='tempest-TestShelveInstance-server-1465409842',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1465409842',id=22,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBArPa0GPQW3updI5wEeWfHenCcjGPGWD88434ubT+vOQr3X0Eo9eIdeVp23Kl758az+2Tg1EnoD3gvKGqOjgjRSe43W1eqMdMcY+qIEIlduzaNHNym4w1xAu5VTrRKiBeQ==',key_name='tempest-TestShelveInstance-1450425907',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2387917610d4d928d60d38ade9e3305',ramdisk_id='',reservation_id='r-b1yl1dsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1084421254',owner_user_name='tempest-TestShelveInstance-1084421254-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:31:21Z,user_data=None,user_id='6dd15a25d55a4c818b4f121ca4c79ac7',uuid=ebdb1528-b5f5-4593-8801-7a25fc358497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.699 247428 DEBUG nova.network.os_vif_util [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Converting VIF {"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.700 247428 DEBUG nova.network.os_vif_util [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.701 247428 DEBUG nova.objects.instance [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'pci_devices' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.896 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  <uuid>ebdb1528-b5f5-4593-8801-7a25fc358497</uuid>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  <name>instance-00000016</name>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <nova:name>tempest-TestShelveInstance-server-1465409842</nova:name>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:31:30</nova:creationTime>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <nova:user uuid="6dd15a25d55a4c818b4f121ca4c79ac7">tempest-TestShelveInstance-1084421254-project-member</nova:user>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <nova:project uuid="d2387917610d4d928d60d38ade9e3305">tempest-TestShelveInstance-1084421254</nova:project>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <nova:port uuid="ca62000c-903a-41ab-abeb-c6427e62fa46">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <entry name="serial">ebdb1528-b5f5-4593-8801-7a25fc358497</entry>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <entry name="uuid">ebdb1528-b5f5-4593-8801-7a25fc358497</entry>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/ebdb1528-b5f5-4593-8801-7a25fc358497_disk">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/ebdb1528-b5f5-4593-8801-7a25fc358497_disk.config">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:8b:ab:0c"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <target dev="tapca62000c-90"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/console.log" append="off"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:31:31 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:31:31 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:31:31 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:31:31 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.897 247428 DEBUG nova.compute.manager [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Preparing to wait for external event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.897 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.897 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.898 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.898 247428 DEBUG nova.virt.libvirt.vif [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:31:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1465409842',display_name='tempest-TestShelveInstance-server-1465409842',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1465409842',id=22,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBArPa0GPQW3updI5wEeWfHenCcjGPGWD88434ubT+vOQr3X0Eo9eIdeVp23Kl758az+2Tg1EnoD3gvKGqOjgjRSe43W1eqMdMcY+qIEIlduzaNHNym4w1xAu5VTrRKiBeQ==',key_name='tempest-TestShelveInstance-1450425907',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d2387917610d4d928d60d38ade9e3305',ramdisk_id='',reservation_id='r-b1yl1dsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1084421254',owner_user_name='tempest-TestShelveInstance-1084421254-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:31:21Z,user_data=None,user_id='6dd15a25d55a4c818b4f121ca4c79ac7',uuid=ebdb1528-b5f5-4593-8801-7a25fc358497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.898 247428 DEBUG nova.network.os_vif_util [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Converting VIF {"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.899 247428 DEBUG nova.network.os_vif_util [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.899 247428 DEBUG os_vif [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.900 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.900 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.900 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.904 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.904 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca62000c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.905 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca62000c-90, col_values=(('external_ids', {'iface-id': 'ca62000c-903a-41ab-abeb-c6427e62fa46', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:ab:0c', 'vm-uuid': 'ebdb1528-b5f5-4593-8801-7a25fc358497'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.906 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:31 np0005596060 NetworkManager[48900]: <info>  [1769452291.9071] manager: (tapca62000c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.908 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.916 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.916 247428 INFO os_vif [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90')#033[00m
Jan 26 13:31:31 np0005596060 nova_compute[247421]: 2026-01-26 18:31:31.977 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 13:31:32 np0005596060 nova_compute[247421]: 2026-01-26 18:31:32.717 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:31:32 np0005596060 nova_compute[247421]: 2026-01-26 18:31:32.718 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:31:32 np0005596060 nova_compute[247421]: 2026-01-26 18:31:32.718 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] No VIF found with MAC fa:16:3e:8b:ab:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:31:32 np0005596060 nova_compute[247421]: 2026-01-26 18:31:32.718 247428 INFO nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Using config drive#033[00m
Jan 26 13:31:32 np0005596060 nova_compute[247421]: 2026-01-26 18:31:32.739 247428 DEBUG nova.storage.rbd_utils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:31:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:31:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:33.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:31:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:33.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 13:31:34 np0005596060 nova_compute[247421]: 2026-01-26 18:31:34.975 247428 INFO nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Creating config drive at /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/disk.config#033[00m
Jan 26 13:31:34 np0005596060 nova_compute[247421]: 2026-01-26 18:31:34.981 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb_tahy9s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:31:35 np0005596060 nova_compute[247421]: 2026-01-26 18:31:35.007 247428 DEBUG nova.network.neutron [req-957194bf-c06d-4e9b-bb18-64c9edc31e24 req-30cad209-d235-4646-9705-d401f7c9b7c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updated VIF entry in instance network info cache for port ca62000c-903a-41ab-abeb-c6427e62fa46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:31:35 np0005596060 nova_compute[247421]: 2026-01-26 18:31:35.008 247428 DEBUG nova.network.neutron [req-957194bf-c06d-4e9b-bb18-64c9edc31e24 req-30cad209-d235-4646-9705-d401f7c9b7c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updating instance_info_cache with network_info: [{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:31:35 np0005596060 nova_compute[247421]: 2026-01-26 18:31:35.113 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb_tahy9s" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:31:35 np0005596060 nova_compute[247421]: 2026-01-26 18:31:35.140 247428 DEBUG nova.storage.rbd_utils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:31:35 np0005596060 nova_compute[247421]: 2026-01-26 18:31:35.145 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/disk.config ebdb1528-b5f5-4593-8801-7a25fc358497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:31:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:35.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:31:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:35.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:31:35 np0005596060 nova_compute[247421]: 2026-01-26 18:31:35.446 247428 DEBUG oslo_concurrency.lockutils [req-957194bf-c06d-4e9b-bb18-64c9edc31e24 req-30cad209-d235-4646-9705-d401f7c9b7c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:31:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.150 247428 DEBUG oslo_concurrency.processutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/disk.config ebdb1528-b5f5-4593-8801-7a25fc358497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.005s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.151 247428 INFO nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Deleting local config drive /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/disk.config because it was imported into RBD.#033[00m
Jan 26 13:31:36 np0005596060 kernel: tapca62000c-90: entered promiscuous mode
Jan 26 13:31:36 np0005596060 NetworkManager[48900]: <info>  [1769452296.2089] manager: (tapca62000c-90): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Jan 26 13:31:36 np0005596060 ovn_controller[148842]: 2026-01-26T18:31:36Z|00119|binding|INFO|Claiming lport ca62000c-903a-41ab-abeb-c6427e62fa46 for this chassis.
Jan 26 13:31:36 np0005596060 ovn_controller[148842]: 2026-01-26T18:31:36Z|00120|binding|INFO|ca62000c-903a-41ab-abeb-c6427e62fa46: Claiming fa:16:3e:8b:ab:0c 10.100.0.9
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.211 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.214 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:36 np0005596060 systemd-udevd[286489]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:31:36 np0005596060 systemd-machined[213879]: New machine qemu-10-instance-00000016.
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.255 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:ab:0c 10.100.0.9'], port_security=['fa:16:3e:8b:ab:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ebdb1528-b5f5-4593-8801-7a25fc358497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de54f204-706b-4f67-80ee-0be6151f732b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2387917610d4d928d60d38ade9e3305', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1cf612df-2e43-4b29-bdb2-6253f8c086ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f66771a-4d2d-438c-ad16-4a45d6686a0f, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=ca62000c-903a-41ab-abeb-c6427e62fa46) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.257 159331 INFO neutron.agent.ovn.metadata.agent [-] Port ca62000c-903a-41ab-abeb-c6427e62fa46 in datapath de54f204-706b-4f67-80ee-0be6151f732b bound to our chassis#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.259 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de54f204-706b-4f67-80ee-0be6151f732b#033[00m
Jan 26 13:31:36 np0005596060 NetworkManager[48900]: <info>  [1769452296.2610] device (tapca62000c-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:31:36 np0005596060 NetworkManager[48900]: <info>  [1769452296.2626] device (tapca62000c-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.277 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc60251-2d1c-453a-95ee-a09e5aa87f75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.279 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapde54f204-71 in ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.282 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapde54f204-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.282 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[18297a0d-5db9-4d61-90cc-0eb007ec0682]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.284 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5460ac73-1888-464f-a4a5-67c5d2ddc6c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 systemd[1]: Started Virtual Machine qemu-10-instance-00000016.
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.296 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.301 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[a7659662-e02c-4c4b-a69f-14649d52b1c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 ovn_controller[148842]: 2026-01-26T18:31:36Z|00121|binding|INFO|Setting lport ca62000c-903a-41ab-abeb-c6427e62fa46 ovn-installed in OVS
Jan 26 13:31:36 np0005596060 ovn_controller[148842]: 2026-01-26T18:31:36Z|00122|binding|INFO|Setting lport ca62000c-903a-41ab-abeb-c6427e62fa46 up in Southbound
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.305 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.317 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[af4074a9-19f0-443f-992f-9498d7bcf65c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.351 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[f4918198-5212-4b6f-be1f-86fdeb2b623e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 NetworkManager[48900]: <info>  [1769452296.3571] manager: (tapde54f204-70): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.356 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9ad3e6-549a-45a8-a5ac-ebfe071be3f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 systemd-udevd[286492]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.392 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[3951714f-dad2-4fb3-aae7-d9779ac9f732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.396 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[aeeffb01-8c62-497b-a0d1-36972c845802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 NetworkManager[48900]: <info>  [1769452296.4194] device (tapde54f204-70): carrier: link connected
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.429 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[e7125595-f729-48b3-a151-1a5093d40c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.447 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[210ea903-1389-46a5-9c30-b797d571c497]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde54f204-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:c6:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606100, 'reachable_time': 43402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286523, 'error': None, 'target': 'ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.465 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[531f0cb8-b1a5-491e-8a60-ee5751ba1234]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:c618'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606100, 'tstamp': 606100}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286524, 'error': None, 'target': 'ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.486 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e4976bcd-919d-4caf-8ec0-1e237471c359]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde54f204-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:c6:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606100, 'reachable_time': 43402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286525, 'error': None, 'target': 'ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.520 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f1877b17-87b3-4488-8e63-0c7379d2ca25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.585 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a80520-e14e-4fa5-a6be-01427ce02d09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.587 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde54f204-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.588 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.589 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde54f204-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:31:36 np0005596060 kernel: tapde54f204-70: entered promiscuous mode
Jan 26 13:31:36 np0005596060 NetworkManager[48900]: <info>  [1769452296.5924] manager: (tapde54f204-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.592 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.593 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.595 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde54f204-70, col_values=(('external_ids', {'iface-id': 'c2c971c3-99f6-4118-be80-725c9fa469d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.596 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:36 np0005596060 ovn_controller[148842]: 2026-01-26T18:31:36Z|00123|binding|INFO|Releasing lport c2c971c3-99f6-4118-be80-725c9fa469d2 from this chassis (sb_readonly=0)
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.597 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.599 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/de54f204-706b-4f67-80ee-0be6151f732b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/de54f204-706b-4f67-80ee-0be6151f732b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.600 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[25f94873-353b-4548-896d-984fbee7f2cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:31:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 86 op/s
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.602 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-de54f204-706b-4f67-80ee-0be6151f732b
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/de54f204-706b-4f67-80ee-0be6151f732b.pid.haproxy
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID de54f204-706b-4f67-80ee-0be6151f732b
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:31:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:36.603 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b', 'env', 'PROCESS_TAG=haproxy-de54f204-706b-4f67-80ee-0be6151f732b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/de54f204-706b-4f67-80ee-0be6151f732b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.610 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.807 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452296.8073266, ebdb1528-b5f5-4593-8801-7a25fc358497 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.809 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] VM Started (Lifecycle Event)#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.865 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.871 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452296.8074574, ebdb1528-b5f5-4593-8801-7a25fc358497 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.871 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] VM Paused (Lifecycle Event)#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.906 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.923 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.926 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.959 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:31:36 np0005596060 nova_compute[247421]: 2026-01-26 18:31:36.979 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:37 np0005596060 podman[286673]: 2026-01-26 18:31:37.065260832 +0000 UTC m=+0.074908224 container create 5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:31:37 np0005596060 podman[286673]: 2026-01-26 18:31:37.01268614 +0000 UTC m=+0.022333552 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:31:37 np0005596060 systemd[1]: Started libpod-conmon-5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913.scope.
Jan 26 13:31:37 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:31:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:37.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:37 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d2b953a2b6457af4d8c1c97a0960cb2301b4fff660cbbebe5c189a94911c1fc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:37 np0005596060 podman[286673]: 2026-01-26 18:31:37.223380269 +0000 UTC m=+0.233027671 container init 5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 13:31:37 np0005596060 podman[286673]: 2026-01-26 18:31:37.231487573 +0000 UTC m=+0.241134965 container start 5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 13:31:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:31:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:37.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.256 247428 DEBUG nova.compute.manager [req-6f0a5e4e-85ab-4f7b-b056-c0bcf9218fcd req-a2f59d32-ae4a-49a4-b1d5-cea16d18dfa0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:31:37 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[286723]: [NOTICE]   (286730) : New worker (286732) forked
Jan 26 13:31:37 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[286723]: [NOTICE]   (286730) : Loading success.
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.258 247428 DEBUG oslo_concurrency.lockutils [req-6f0a5e4e-85ab-4f7b-b056-c0bcf9218fcd req-a2f59d32-ae4a-49a4-b1d5-cea16d18dfa0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.259 247428 DEBUG oslo_concurrency.lockutils [req-6f0a5e4e-85ab-4f7b-b056-c0bcf9218fcd req-a2f59d32-ae4a-49a4-b1d5-cea16d18dfa0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.259 247428 DEBUG oslo_concurrency.lockutils [req-6f0a5e4e-85ab-4f7b-b056-c0bcf9218fcd req-a2f59d32-ae4a-49a4-b1d5-cea16d18dfa0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.259 247428 DEBUG nova.compute.manager [req-6f0a5e4e-85ab-4f7b-b056-c0bcf9218fcd req-a2f59d32-ae4a-49a4-b1d5-cea16d18dfa0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Processing event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.260 247428 DEBUG nova.compute.manager [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.264 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452297.2646797, ebdb1528-b5f5-4593-8801-7a25fc358497 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.265 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.267 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.270 247428 INFO nova.virt.libvirt.driver [-] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Instance spawned successfully.#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.270 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.314 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.318 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.459 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.459 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.460 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.460 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.460 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.461 247428 DEBUG nova.virt.libvirt.driver [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.463 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:31:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:31:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:31:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:31:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:31:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:31:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:31:37 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5d0b7494-3f3f-49aa-ba6d-a247b68667a7 does not exist
Jan 26 13:31:37 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 916ea90d-1478-4f0b-ad0e-463ceabc69c7 does not exist
Jan 26 13:31:37 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 68a05509-8bc4-41e3-b7f7-348583e096ae does not exist
Jan 26 13:31:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:31:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:31:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:31:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:31:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:31:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.663 247428 INFO nova.compute.manager [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Took 15.88 seconds to spawn the instance on the hypervisor.#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.663 247428 DEBUG nova.compute.manager [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.872 247428 INFO nova.compute.manager [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Took 17.56 seconds to build instance.#033[00m
Jan 26 13:31:37 np0005596060 nova_compute[247421]: 2026-01-26 18:31:37.978 247428 DEBUG oslo_concurrency.lockutils [None req-2c9cc2f5-e45e-477f-adc7-cdfedc6ab705 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:31:38 np0005596060 podman[286898]: 2026-01-26 18:31:38.26764777 +0000 UTC m=+0.077546822 container create 2b828976944ea4bc5bbf667758ab6c0bb93b6afa7bddeebd698ad2145c714bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:31:38 np0005596060 systemd[1]: Started libpod-conmon-2b828976944ea4bc5bbf667758ab6c0bb93b6afa7bddeebd698ad2145c714bea.scope.
Jan 26 13:31:38 np0005596060 podman[286898]: 2026-01-26 18:31:38.224818602 +0000 UTC m=+0.034717654 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:31:38 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:31:38 np0005596060 podman[286898]: 2026-01-26 18:31:38.405086066 +0000 UTC m=+0.214985128 container init 2b828976944ea4bc5bbf667758ab6c0bb93b6afa7bddeebd698ad2145c714bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:31:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:31:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:31:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:31:38 np0005596060 podman[286898]: 2026-01-26 18:31:38.416728579 +0000 UTC m=+0.226627611 container start 2b828976944ea4bc5bbf667758ab6c0bb93b6afa7bddeebd698ad2145c714bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 13:31:38 np0005596060 podman[286898]: 2026-01-26 18:31:38.420812531 +0000 UTC m=+0.230711593 container attach 2b828976944ea4bc5bbf667758ab6c0bb93b6afa7bddeebd698ad2145c714bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:31:38 np0005596060 suspicious_wu[286938]: 167 167
Jan 26 13:31:38 np0005596060 systemd[1]: libpod-2b828976944ea4bc5bbf667758ab6c0bb93b6afa7bddeebd698ad2145c714bea.scope: Deactivated successfully.
Jan 26 13:31:38 np0005596060 conmon[286938]: conmon 2b828976944ea4bc5bbf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b828976944ea4bc5bbf667758ab6c0bb93b6afa7bddeebd698ad2145c714bea.scope/container/memory.events
Jan 26 13:31:38 np0005596060 podman[286898]: 2026-01-26 18:31:38.426467643 +0000 UTC m=+0.236366675 container died 2b828976944ea4bc5bbf667758ab6c0bb93b6afa7bddeebd698ad2145c714bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wu, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 13:31:38 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7abe363b24ea55ec2a949fb0bc26de7815b149a90070ce971fc7d943a7110e7b-merged.mount: Deactivated successfully.
Jan 26 13:31:38 np0005596060 podman[286898]: 2026-01-26 18:31:38.518642211 +0000 UTC m=+0.328541243 container remove 2b828976944ea4bc5bbf667758ab6c0bb93b6afa7bddeebd698ad2145c714bea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 13:31:38 np0005596060 systemd[1]: libpod-conmon-2b828976944ea4bc5bbf667758ab6c0bb93b6afa7bddeebd698ad2145c714bea.scope: Deactivated successfully.
Jan 26 13:31:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 798 KiB/s wr, 71 op/s
Jan 26 13:31:38 np0005596060 podman[286988]: 2026-01-26 18:31:38.743401824 +0000 UTC m=+0.065442957 container create 312870b7cc392962c60536c75ac7df17e8fd69c340eb6e4a9e8fc27a57b70497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 13:31:38 np0005596060 systemd[1]: Started libpod-conmon-312870b7cc392962c60536c75ac7df17e8fd69c340eb6e4a9e8fc27a57b70497.scope.
Jan 26 13:31:38 np0005596060 podman[286988]: 2026-01-26 18:31:38.705521011 +0000 UTC m=+0.027562164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:31:38 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:31:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5727b969e3f6250662adc0a452a1d723ff8398e8042fb776c5d342627ec20bbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5727b969e3f6250662adc0a452a1d723ff8398e8042fb776c5d342627ec20bbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5727b969e3f6250662adc0a452a1d723ff8398e8042fb776c5d342627ec20bbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5727b969e3f6250662adc0a452a1d723ff8398e8042fb776c5d342627ec20bbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:38 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5727b969e3f6250662adc0a452a1d723ff8398e8042fb776c5d342627ec20bbe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:38 np0005596060 podman[286988]: 2026-01-26 18:31:38.827535589 +0000 UTC m=+0.149576722 container init 312870b7cc392962c60536c75ac7df17e8fd69c340eb6e4a9e8fc27a57b70497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:31:38 np0005596060 podman[286988]: 2026-01-26 18:31:38.846124247 +0000 UTC m=+0.168165380 container start 312870b7cc392962c60536c75ac7df17e8fd69c340eb6e4a9e8fc27a57b70497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:31:38 np0005596060 podman[286988]: 2026-01-26 18:31:38.848978079 +0000 UTC m=+0.171019212 container attach 312870b7cc392962c60536c75ac7df17e8fd69c340eb6e4a9e8fc27a57b70497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 13:31:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:39.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:39.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:39 np0005596060 nova_compute[247421]: 2026-01-26 18:31:39.563 247428 DEBUG nova.compute.manager [req-63651e9c-b68e-4f48-a562-9e983dfa0990 req-12eb1b1a-4662-4067-8c8f-bb64d6de6ef0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:31:39 np0005596060 nova_compute[247421]: 2026-01-26 18:31:39.565 247428 DEBUG oslo_concurrency.lockutils [req-63651e9c-b68e-4f48-a562-9e983dfa0990 req-12eb1b1a-4662-4067-8c8f-bb64d6de6ef0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:31:39 np0005596060 nova_compute[247421]: 2026-01-26 18:31:39.566 247428 DEBUG oslo_concurrency.lockutils [req-63651e9c-b68e-4f48-a562-9e983dfa0990 req-12eb1b1a-4662-4067-8c8f-bb64d6de6ef0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:31:39 np0005596060 nova_compute[247421]: 2026-01-26 18:31:39.566 247428 DEBUG oslo_concurrency.lockutils [req-63651e9c-b68e-4f48-a562-9e983dfa0990 req-12eb1b1a-4662-4067-8c8f-bb64d6de6ef0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:31:39 np0005596060 nova_compute[247421]: 2026-01-26 18:31:39.566 247428 DEBUG nova.compute.manager [req-63651e9c-b68e-4f48-a562-9e983dfa0990 req-12eb1b1a-4662-4067-8c8f-bb64d6de6ef0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] No waiting events found dispatching network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:31:39 np0005596060 nova_compute[247421]: 2026-01-26 18:31:39.566 247428 WARNING nova.compute.manager [req-63651e9c-b68e-4f48-a562-9e983dfa0990 req-12eb1b1a-4662-4067-8c8f-bb64d6de6ef0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received unexpected event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 for instance with vm_state active and task_state None.#033[00m
Jan 26 13:31:39 np0005596060 beautiful_banzai[287004]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:31:39 np0005596060 beautiful_banzai[287004]: --> relative data size: 1.0
Jan 26 13:31:39 np0005596060 beautiful_banzai[287004]: --> All data devices are unavailable
Jan 26 13:31:39 np0005596060 systemd[1]: libpod-312870b7cc392962c60536c75ac7df17e8fd69c340eb6e4a9e8fc27a57b70497.scope: Deactivated successfully.
Jan 26 13:31:39 np0005596060 podman[287019]: 2026-01-26 18:31:39.76051312 +0000 UTC m=+0.038194411 container died 312870b7cc392962c60536c75ac7df17e8fd69c340eb6e4a9e8fc27a57b70497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 26 13:31:39 np0005596060 systemd[1]: var-lib-containers-storage-overlay-5727b969e3f6250662adc0a452a1d723ff8398e8042fb776c5d342627ec20bbe-merged.mount: Deactivated successfully.
Jan 26 13:31:39 np0005596060 podman[287019]: 2026-01-26 18:31:39.877438001 +0000 UTC m=+0.155119272 container remove 312870b7cc392962c60536c75ac7df17e8fd69c340eb6e4a9e8fc27a57b70497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:31:39 np0005596060 systemd[1]: libpod-conmon-312870b7cc392962c60536c75ac7df17e8fd69c340eb6e4a9e8fc27a57b70497.scope: Deactivated successfully.
Jan 26 13:31:40 np0005596060 podman[287177]: 2026-01-26 18:31:40.476519566 +0000 UTC m=+0.053432444 container create 6344085217f4234ed087f8c3dfbc7a764f73f44701e7d70c57536a02e1eec43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gagarin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:31:40 np0005596060 systemd[1]: Started libpod-conmon-6344085217f4234ed087f8c3dfbc7a764f73f44701e7d70c57536a02e1eec43a.scope.
Jan 26 13:31:40 np0005596060 podman[287177]: 2026-01-26 18:31:40.444146922 +0000 UTC m=+0.021059820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:31:40 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:31:40 np0005596060 podman[287177]: 2026-01-26 18:31:40.556457037 +0000 UTC m=+0.133369945 container init 6344085217f4234ed087f8c3dfbc7a764f73f44701e7d70c57536a02e1eec43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 13:31:40 np0005596060 podman[287177]: 2026-01-26 18:31:40.5641621 +0000 UTC m=+0.141074978 container start 6344085217f4234ed087f8c3dfbc7a764f73f44701e7d70c57536a02e1eec43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gagarin, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:31:40 np0005596060 podman[287177]: 2026-01-26 18:31:40.568378266 +0000 UTC m=+0.145291154 container attach 6344085217f4234ed087f8c3dfbc7a764f73f44701e7d70c57536a02e1eec43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gagarin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:31:40 np0005596060 reverent_gagarin[287193]: 167 167
Jan 26 13:31:40 np0005596060 systemd[1]: libpod-6344085217f4234ed087f8c3dfbc7a764f73f44701e7d70c57536a02e1eec43a.scope: Deactivated successfully.
Jan 26 13:31:40 np0005596060 podman[287177]: 2026-01-26 18:31:40.572195312 +0000 UTC m=+0.149108190 container died 6344085217f4234ed087f8c3dfbc7a764f73f44701e7d70c57536a02e1eec43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gagarin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:31:40 np0005596060 systemd[1]: var-lib-containers-storage-overlay-6cd2211508ef5dc08808eb74a24ee0f8b2e83e46177fc4ab77d8d25e85aadbd7-merged.mount: Deactivated successfully.
Jan 26 13:31:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 12 KiB/s wr, 11 op/s
Jan 26 13:31:40 np0005596060 podman[287177]: 2026-01-26 18:31:40.615281646 +0000 UTC m=+0.192194524 container remove 6344085217f4234ed087f8c3dfbc7a764f73f44701e7d70c57536a02e1eec43a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_gagarin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 13:31:40 np0005596060 systemd[1]: libpod-conmon-6344085217f4234ed087f8c3dfbc7a764f73f44701e7d70c57536a02e1eec43a.scope: Deactivated successfully.
Jan 26 13:31:40 np0005596060 podman[287216]: 2026-01-26 18:31:40.810910675 +0000 UTC m=+0.046892650 container create 8a3e7ebb0d5039135fa708db009b7d7d08e0e5d9225495536793ce509656bc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wright, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:31:40 np0005596060 systemd[1]: Started libpod-conmon-8a3e7ebb0d5039135fa708db009b7d7d08e0e5d9225495536793ce509656bc76.scope.
Jan 26 13:31:40 np0005596060 podman[287216]: 2026-01-26 18:31:40.789961229 +0000 UTC m=+0.025943224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:31:40 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:31:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87728525d0a0f4e04c43d7a4c45ff1d79614fe9d4c7dc5da84687f206bcac7e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87728525d0a0f4e04c43d7a4c45ff1d79614fe9d4c7dc5da84687f206bcac7e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87728525d0a0f4e04c43d7a4c45ff1d79614fe9d4c7dc5da84687f206bcac7e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87728525d0a0f4e04c43d7a4c45ff1d79614fe9d4c7dc5da84687f206bcac7e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:31:40 np0005596060 podman[287216]: 2026-01-26 18:31:40.92441485 +0000 UTC m=+0.160396845 container init 8a3e7ebb0d5039135fa708db009b7d7d08e0e5d9225495536793ce509656bc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 13:31:40 np0005596060 podman[287216]: 2026-01-26 18:31:40.932989555 +0000 UTC m=+0.168971530 container start 8a3e7ebb0d5039135fa708db009b7d7d08e0e5d9225495536793ce509656bc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:31:40 np0005596060 podman[287216]: 2026-01-26 18:31:40.936574536 +0000 UTC m=+0.172556531 container attach 8a3e7ebb0d5039135fa708db009b7d7d08e0e5d9225495536793ce509656bc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:31:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:41.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:31:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:41.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:31:41 np0005596060 laughing_wright[287232]: {
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:    "1": [
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:        {
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            "devices": [
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "/dev/loop3"
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            ],
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            "lv_name": "ceph_lv0",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            "lv_size": "7511998464",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            "name": "ceph_lv0",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            "tags": {
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "ceph.cluster_name": "ceph",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "ceph.crush_device_class": "",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "ceph.encrypted": "0",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "ceph.osd_id": "1",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "ceph.type": "block",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:                "ceph.vdo": "0"
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            },
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            "type": "block",
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:            "vg_name": "ceph_vg0"
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:        }
Jan 26 13:31:41 np0005596060 laughing_wright[287232]:    ]
Jan 26 13:31:41 np0005596060 laughing_wright[287232]: }
Jan 26 13:31:41 np0005596060 systemd[1]: libpod-8a3e7ebb0d5039135fa708db009b7d7d08e0e5d9225495536793ce509656bc76.scope: Deactivated successfully.
Jan 26 13:31:41 np0005596060 podman[287216]: 2026-01-26 18:31:41.728267525 +0000 UTC m=+0.964249530 container died 8a3e7ebb0d5039135fa708db009b7d7d08e0e5d9225495536793ce509656bc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:31:41 np0005596060 systemd[1]: var-lib-containers-storage-overlay-87728525d0a0f4e04c43d7a4c45ff1d79614fe9d4c7dc5da84687f206bcac7e3-merged.mount: Deactivated successfully.
Jan 26 13:31:41 np0005596060 podman[287216]: 2026-01-26 18:31:41.790616783 +0000 UTC m=+1.026598758 container remove 8a3e7ebb0d5039135fa708db009b7d7d08e0e5d9225495536793ce509656bc76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 13:31:41 np0005596060 systemd[1]: libpod-conmon-8a3e7ebb0d5039135fa708db009b7d7d08e0e5d9225495536793ce509656bc76.scope: Deactivated successfully.
Jan 26 13:31:41 np0005596060 nova_compute[247421]: 2026-01-26 18:31:41.908 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:41 np0005596060 nova_compute[247421]: 2026-01-26 18:31:41.982 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:42 np0005596060 podman[287397]: 2026-01-26 18:31:42.422619646 +0000 UTC m=+0.037129285 container create bd09464fb675b735fdbe0e2fd5d445294b34f0916ec5b4c75b89d14a9ddf1534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 26 13:31:42 np0005596060 systemd[1]: Started libpod-conmon-bd09464fb675b735fdbe0e2fd5d445294b34f0916ec5b4c75b89d14a9ddf1534.scope.
Jan 26 13:31:42 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:31:42 np0005596060 podman[287397]: 2026-01-26 18:31:42.486453201 +0000 UTC m=+0.100962860 container init bd09464fb675b735fdbe0e2fd5d445294b34f0916ec5b4c75b89d14a9ddf1534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dewdney, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 13:31:42 np0005596060 podman[287397]: 2026-01-26 18:31:42.492922714 +0000 UTC m=+0.107432353 container start bd09464fb675b735fdbe0e2fd5d445294b34f0916ec5b4c75b89d14a9ddf1534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dewdney, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:31:42 np0005596060 podman[287397]: 2026-01-26 18:31:42.496015692 +0000 UTC m=+0.110525361 container attach bd09464fb675b735fdbe0e2fd5d445294b34f0916ec5b4c75b89d14a9ddf1534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dewdney, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:31:42 np0005596060 relaxed_dewdney[287414]: 167 167
Jan 26 13:31:42 np0005596060 systemd[1]: libpod-bd09464fb675b735fdbe0e2fd5d445294b34f0916ec5b4c75b89d14a9ddf1534.scope: Deactivated successfully.
Jan 26 13:31:42 np0005596060 podman[287397]: 2026-01-26 18:31:42.497264723 +0000 UTC m=+0.111774372 container died bd09464fb675b735fdbe0e2fd5d445294b34f0916ec5b4c75b89d14a9ddf1534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dewdney, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 26 13:31:42 np0005596060 podman[287397]: 2026-01-26 18:31:42.406564062 +0000 UTC m=+0.021073721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:31:42 np0005596060 systemd[1]: var-lib-containers-storage-overlay-1c9928ea658ba4328f47371e233d226e23ee45f700678fdacbc89e7122617bb7-merged.mount: Deactivated successfully.
Jan 26 13:31:42 np0005596060 podman[287397]: 2026-01-26 18:31:42.53449566 +0000 UTC m=+0.149005299 container remove bd09464fb675b735fdbe0e2fd5d445294b34f0916ec5b4c75b89d14a9ddf1534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 13:31:42 np0005596060 systemd[1]: libpod-conmon-bd09464fb675b735fdbe0e2fd5d445294b34f0916ec5b4c75b89d14a9ddf1534.scope: Deactivated successfully.
Jan 26 13:31:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 142 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 594 KiB/s rd, 882 KiB/s wr, 45 op/s
Jan 26 13:31:42 np0005596060 podman[287441]: 2026-01-26 18:31:42.689066737 +0000 UTC m=+0.022950249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:31:42 np0005596060 NetworkManager[48900]: <info>  [1769452302.8505] manager: (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Jan 26 13:31:42 np0005596060 NetworkManager[48900]: <info>  [1769452302.8513] manager: (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Jan 26 13:31:42 np0005596060 nova_compute[247421]: 2026-01-26 18:31:42.858 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:42 np0005596060 podman[287441]: 2026-01-26 18:31:42.938479268 +0000 UTC m=+0.272362760 container create 794b07fa3fd6ef80588dd49f1c8ac0242d834e3ebc580b551e72cfd91abceab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_aryabhata, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 26 13:31:42 np0005596060 systemd[1]: Started libpod-conmon-794b07fa3fd6ef80588dd49f1c8ac0242d834e3ebc580b551e72cfd91abceab4.scope.
Jan 26 13:31:43 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:31:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13ca55e63ab768c3c2b5c0a7e3b425b3092046de5b1a025077c3b0d686f3e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13ca55e63ab768c3c2b5c0a7e3b425b3092046de5b1a025077c3b0d686f3e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13ca55e63ab768c3c2b5c0a7e3b425b3092046de5b1a025077c3b0d686f3e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13ca55e63ab768c3c2b5c0a7e3b425b3092046de5b1a025077c3b0d686f3e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:31:43 np0005596060 nova_compute[247421]: 2026-01-26 18:31:43.138 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:43 np0005596060 ovn_controller[148842]: 2026-01-26T18:31:43Z|00124|binding|INFO|Releasing lport c2c971c3-99f6-4118-be80-725c9fa469d2 from this chassis (sb_readonly=0)
Jan 26 13:31:43 np0005596060 nova_compute[247421]: 2026-01-26 18:31:43.164 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:31:43 np0005596060 podman[287441]: 2026-01-26 18:31:43.193241114 +0000 UTC m=+0.527124636 container init 794b07fa3fd6ef80588dd49f1c8ac0242d834e3ebc580b551e72cfd91abceab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:31:43 np0005596060 podman[287456]: 2026-01-26 18:31:43.199460541 +0000 UTC m=+0.209679954 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:31:43 np0005596060 podman[287441]: 2026-01-26 18:31:43.20221927 +0000 UTC m=+0.536102762 container start 794b07fa3fd6ef80588dd49f1c8ac0242d834e3ebc580b551e72cfd91abceab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_aryabhata, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:31:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:43.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:43.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:43 np0005596060 podman[287441]: 2026-01-26 18:31:43.304584704 +0000 UTC m=+0.638468226 container attach 794b07fa3fd6ef80588dd49f1c8ac0242d834e3ebc580b551e72cfd91abceab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_aryabhata, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 13:31:43 np0005596060 podman[287454]: 2026-01-26 18:31:43.387822688 +0000 UTC m=+0.397980400 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 13:31:44 np0005596060 brave_aryabhata[287468]: {
Jan 26 13:31:44 np0005596060 brave_aryabhata[287468]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:31:44 np0005596060 brave_aryabhata[287468]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:31:44 np0005596060 brave_aryabhata[287468]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:31:44 np0005596060 brave_aryabhata[287468]:        "osd_id": 1,
Jan 26 13:31:44 np0005596060 brave_aryabhata[287468]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:31:44 np0005596060 brave_aryabhata[287468]:        "type": "bluestore"
Jan 26 13:31:44 np0005596060 brave_aryabhata[287468]:    }
Jan 26 13:31:44 np0005596060 brave_aryabhata[287468]: }
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:31:44 np0005596060 systemd[1]: libpod-794b07fa3fd6ef80588dd49f1c8ac0242d834e3ebc580b551e72cfd91abceab4.scope: Deactivated successfully.
Jan 26 13:31:44 np0005596060 conmon[287468]: conmon 794b07fa3fd6ef80588d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-794b07fa3fd6ef80588dd49f1c8ac0242d834e3ebc580b551e72cfd91abceab4.scope/container/memory.events
Jan 26 13:31:44 np0005596060 podman[287441]: 2026-01-26 18:31:44.109835775 +0000 UTC m=+1.443719267 container died 794b07fa3fd6ef80588dd49f1c8ac0242d834e3ebc580b551e72cfd91abceab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_aryabhata, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:31:44 np0005596060 systemd[1]: var-lib-containers-storage-overlay-6c13ca55e63ab768c3c2b5c0a7e3b425b3092046de5b1a025077c3b0d686f3e9-merged.mount: Deactivated successfully.
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:31:44
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'vms', 'default.rgw.control']
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:31:44 np0005596060 podman[287441]: 2026-01-26 18:31:44.157861232 +0000 UTC m=+1.491744724 container remove 794b07fa3fd6ef80588dd49f1c8ac0242d834e3ebc580b551e72cfd91abceab4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 13:31:44 np0005596060 systemd[1]: libpod-conmon-794b07fa3fd6ef80588dd49f1c8ac0242d834e3ebc580b551e72cfd91abceab4.scope: Deactivated successfully.
Jan 26 13:31:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:31:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:31:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:31:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7ba0a53e-942c-4e67-86b9-184b00ab8d13 does not exist
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b90fc472-d8e6-4818-a071-c1312502ed12 does not exist
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 281f46dc-3488-4cd2-8479-61e1b7521ffc does not exist
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 167 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:31:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:31:45 np0005596060 nova_compute[247421]: 2026-01-26 18:31:45.090 247428 DEBUG nova.compute.manager [req-7b59534a-e46c-47a1-9428-d2f7c0a613d3 req-9a73f5b0-d4c5-4995-bc26-79a387950c66 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-changed-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 13:31:45 np0005596060 nova_compute[247421]: 2026-01-26 18:31:45.091 247428 DEBUG nova.compute.manager [req-7b59534a-e46c-47a1-9428-d2f7c0a613d3 req-9a73f5b0-d4c5-4995-bc26-79a387950c66 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Refreshing instance network info cache due to event network-changed-ca62000c-903a-41ab-abeb-c6427e62fa46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 13:31:45 np0005596060 nova_compute[247421]: 2026-01-26 18:31:45.091 247428 DEBUG oslo_concurrency.lockutils [req-7b59534a-e46c-47a1-9428-d2f7c0a613d3 req-9a73f5b0-d4c5-4995-bc26-79a387950c66 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 13:31:45 np0005596060 nova_compute[247421]: 2026-01-26 18:31:45.091 247428 DEBUG oslo_concurrency.lockutils [req-7b59534a-e46c-47a1-9428-d2f7c0a613d3 req-9a73f5b0-d4c5-4995-bc26-79a387950c66 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 13:31:45 np0005596060 nova_compute[247421]: 2026-01-26 18:31:45.091 247428 DEBUG nova.network.neutron [req-7b59534a-e46c-47a1-9428-d2f7c0a613d3 req-9a73f5b0-d4c5-4995-bc26-79a387950c66 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Refreshing network info cache for port ca62000c-903a-41ab-abeb-c6427e62fa46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 13:31:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:45.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:45.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:31:45 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:31:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:31:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 167 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 26 13:31:46 np0005596060 nova_compute[247421]: 2026-01-26 18:31:46.911 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:31:46 np0005596060 nova_compute[247421]: 2026-01-26 18:31:46.984 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:31:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:47.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:47.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 167 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 26 13:31:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:31:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:49.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:31:49 np0005596060 nova_compute[247421]: 2026-01-26 18:31:49.258 247428 DEBUG nova.network.neutron [req-7b59534a-e46c-47a1-9428-d2f7c0a613d3 req-9a73f5b0-d4c5-4995-bc26-79a387950c66 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updated VIF entry in instance network info cache for port ca62000c-903a-41ab-abeb-c6427e62fa46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 13:31:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:49.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:49 np0005596060 nova_compute[247421]: 2026-01-26 18:31:49.260 247428 DEBUG nova.network.neutron [req-7b59534a-e46c-47a1-9428-d2f7c0a613d3 req-9a73f5b0-d4c5-4995-bc26-79a387950c66 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updating instance_info_cache with network_info: [{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 13:31:49 np0005596060 nova_compute[247421]: 2026-01-26 18:31:49.528 247428 DEBUG oslo_concurrency.lockutils [req-7b59534a-e46c-47a1-9428-d2f7c0a613d3 req-9a73f5b0-d4c5-4995-bc26-79a387950c66 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 13:31:50 np0005596060 ovn_controller[148842]: 2026-01-26T18:31:50Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:ab:0c 10.100.0.9
Jan 26 13:31:50 np0005596060 ovn_controller[148842]: 2026-01-26T18:31:50Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:ab:0c 10.100.0.9
Jan 26 13:31:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 167 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 130 op/s
Jan 26 13:31:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:31:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:51.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:51.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:51 np0005596060 nova_compute[247421]: 2026-01-26 18:31:51.913 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:31:51 np0005596060 nova_compute[247421]: 2026-01-26 18:31:51.987 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:31:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 170 MiB data, 386 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 142 op/s
Jan 26 13:31:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:53.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:31:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:53.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:31:53 np0005596060 nova_compute[247421]: 2026-01-26 18:31:53.265 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:31:53 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:53.265 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 13:31:53 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:31:53.266 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 13:31:53 np0005596060 ovn_controller[148842]: 2026-01-26T18:31:53Z|00125|binding|INFO|Releasing lport c2c971c3-99f6-4118-be80-725c9fa469d2 from this chassis (sb_readonly=0)
Jan 26 13:31:53 np0005596060 nova_compute[247421]: 2026-01-26 18:31:53.862 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:31:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 200 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.4 MiB/s wr, 161 op/s
Jan 26 13:31:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:55.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:55.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:55 np0005596060 ovn_controller[148842]: 2026-01-26T18:31:55Z|00126|binding|INFO|Releasing lport c2c971c3-99f6-4118-be80-725c9fa469d2 from this chassis (sb_readonly=0)
Jan 26 13:31:55 np0005596060 nova_compute[247421]: 2026-01-26 18:31:55.629 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:31:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:31:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 200 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 13:31:56 np0005596060 nova_compute[247421]: 2026-01-26 18:31:56.915 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:31:56 np0005596060 nova_compute[247421]: 2026-01-26 18:31:56.989 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:31:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:57.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:57.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:31:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 200 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 13:31:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:31:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:31:59.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:31:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:31:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:31:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:31:59.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 200 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 13:32:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:32:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:01.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:01 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:01.269 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 13:32:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:01.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:01 np0005596060 nova_compute[247421]: 2026-01-26 18:32:01.917 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:01 np0005596060 nova_compute[247421]: 2026-01-26 18:32:01.992 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 200 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 382 KiB/s rd, 2.2 MiB/s wr, 67 op/s
Jan 26 13:32:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:03.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:32:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:03.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:32:03 np0005596060 nova_compute[247421]: 2026-01-26 18:32:03.394 247428 DEBUG oslo_concurrency.lockutils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:32:03 np0005596060 nova_compute[247421]: 2026-01-26 18:32:03.395 247428 DEBUG oslo_concurrency.lockutils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:32:03 np0005596060 nova_compute[247421]: 2026-01-26 18:32:03.395 247428 INFO nova.compute.manager [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Shelving
Jan 26 13:32:03 np0005596060 nova_compute[247421]: 2026-01-26 18:32:03.833 247428 DEBUG nova.virt.libvirt.driver [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002177227282244556 of space, bias 1.0, pg target 0.6531681846733669 quantized to 32 (current 32)
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002164868032291085 of space, bias 1.0, pg target 0.6494604096873255 quantized to 32 (current 32)
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:32:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:32:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 200 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 293 KiB/s rd, 2.0 MiB/s wr, 57 op/s
Jan 26 13:32:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:05.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:05.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:32:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 200 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 3.7 KiB/s rd, 38 KiB/s wr, 4 op/s
Jan 26 13:32:06 np0005596060 kernel: tapca62000c-90 (unregistering): left promiscuous mode
Jan 26 13:32:06 np0005596060 NetworkManager[48900]: <info>  [1769452326.6239] device (tapca62000c-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:32:06 np0005596060 ovn_controller[148842]: 2026-01-26T18:32:06Z|00127|binding|INFO|Releasing lport ca62000c-903a-41ab-abeb-c6427e62fa46 from this chassis (sb_readonly=0)
Jan 26 13:32:06 np0005596060 ovn_controller[148842]: 2026-01-26T18:32:06Z|00128|binding|INFO|Setting lport ca62000c-903a-41ab-abeb-c6427e62fa46 down in Southbound
Jan 26 13:32:06 np0005596060 nova_compute[247421]: 2026-01-26 18:32:06.632 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:06 np0005596060 ovn_controller[148842]: 2026-01-26T18:32:06Z|00129|binding|INFO|Removing iface tapca62000c-90 ovn-installed in OVS
Jan 26 13:32:06 np0005596060 nova_compute[247421]: 2026-01-26 18:32:06.633 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:06 np0005596060 nova_compute[247421]: 2026-01-26 18:32:06.651 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:06 np0005596060 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000016.scope: Deactivated successfully.
Jan 26 13:32:06 np0005596060 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000016.scope: Consumed 14.351s CPU time.
Jan 26 13:32:06 np0005596060 systemd-machined[213879]: Machine qemu-10-instance-00000016 terminated.
Jan 26 13:32:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:06.746 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:ab:0c 10.100.0.9'], port_security=['fa:16:3e:8b:ab:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ebdb1528-b5f5-4593-8801-7a25fc358497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de54f204-706b-4f67-80ee-0be6151f732b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2387917610d4d928d60d38ade9e3305', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1cf612df-2e43-4b29-bdb2-6253f8c086ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f66771a-4d2d-438c-ad16-4a45d6686a0f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=ca62000c-903a-41ab-abeb-c6427e62fa46) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 13:32:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:06.747 159331 INFO neutron.agent.ovn.metadata.agent [-] Port ca62000c-903a-41ab-abeb-c6427e62fa46 in datapath de54f204-706b-4f67-80ee-0be6151f732b unbound from our chassis
Jan 26 13:32:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:06.748 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de54f204-706b-4f67-80ee-0be6151f732b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 13:32:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:06.749 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[690b0ded-2cac-4b04-8c09-61e215051e55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 13:32:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:06.750 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b namespace which is not needed anymore
Jan 26 13:32:06 np0005596060 nova_compute[247421]: 2026-01-26 18:32:06.873 247428 INFO nova.virt.libvirt.driver [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Instance shutdown successfully after 3 seconds.
Jan 26 13:32:06 np0005596060 nova_compute[247421]: 2026-01-26 18:32:06.883 247428 INFO nova.virt.libvirt.driver [-] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Instance destroyed successfully.
Jan 26 13:32:06 np0005596060 nova_compute[247421]: 2026-01-26 18:32:06.884 247428 DEBUG nova.objects.instance [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'numa_topology' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 13:32:06 np0005596060 nova_compute[247421]: 2026-01-26 18:32:06.919 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:06 np0005596060 nova_compute[247421]: 2026-01-26 18:32:06.994 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:07 np0005596060 nova_compute[247421]: 2026-01-26 18:32:07.058 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:32:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:07.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:32:07 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[286723]: [NOTICE]   (286730) : haproxy version is 2.8.14-c23fe91
Jan 26 13:32:07 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[286723]: [NOTICE]   (286730) : path to executable is /usr/sbin/haproxy
Jan 26 13:32:07 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[286723]: [WARNING]  (286730) : Exiting Master process...
Jan 26 13:32:07 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[286723]: [WARNING]  (286730) : Exiting Master process...
Jan 26 13:32:07 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[286723]: [ALERT]    (286730) : Current worker (286732) exited with code 143 (Terminated)
Jan 26 13:32:07 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[286723]: [WARNING]  (286730) : All workers exited. Exiting... (0)
Jan 26 13:32:07 np0005596060 systemd[1]: libpod-5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913.scope: Deactivated successfully.
Jan 26 13:32:07 np0005596060 podman[287670]: 2026-01-26 18:32:07.275043478 +0000 UTC m=+0.423288196 container died 5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 13:32:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:32:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:07.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:32:07 np0005596060 nova_compute[247421]: 2026-01-26 18:32:07.395 247428 INFO nova.virt.libvirt.driver [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Beginning cold snapshot process#033[00m
Jan 26 13:32:07 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913-userdata-shm.mount: Deactivated successfully.
Jan 26 13:32:07 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8d2b953a2b6457af4d8c1c97a0960cb2301b4fff660cbbebe5c189a94911c1fc-merged.mount: Deactivated successfully.
Jan 26 13:32:07 np0005596060 nova_compute[247421]: 2026-01-26 18:32:07.912 247428 DEBUG nova.virt.libvirt.imagebackend [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] No parent info for 57de5960-c1c5-4cfa-af34-8f58cf25f585; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 26 13:32:08 np0005596060 nova_compute[247421]: 2026-01-26 18:32:08.307 247428 DEBUG nova.storage.rbd_utils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] creating snapshot(76cbb7fbff424318b7296dc3516dfc57) on rbd image(ebdb1528-b5f5-4593-8801-7a25fc358497_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 26 13:32:08 np0005596060 podman[287670]: 2026-01-26 18:32:08.484808179 +0000 UTC m=+1.633052877 container cleanup 5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 26 13:32:08 np0005596060 nova_compute[247421]: 2026-01-26 18:32:08.490 247428 DEBUG nova.compute.manager [req-3e6e92fe-b813-42be-b0b4-35a310e59e43 req-c360e5c8-ee13-4375-9355-c946cf8d776c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-vif-unplugged-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:32:08 np0005596060 nova_compute[247421]: 2026-01-26 18:32:08.491 247428 DEBUG oslo_concurrency.lockutils [req-3e6e92fe-b813-42be-b0b4-35a310e59e43 req-c360e5c8-ee13-4375-9355-c946cf8d776c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:32:08 np0005596060 nova_compute[247421]: 2026-01-26 18:32:08.491 247428 DEBUG oslo_concurrency.lockutils [req-3e6e92fe-b813-42be-b0b4-35a310e59e43 req-c360e5c8-ee13-4375-9355-c946cf8d776c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:32:08 np0005596060 nova_compute[247421]: 2026-01-26 18:32:08.492 247428 DEBUG oslo_concurrency.lockutils [req-3e6e92fe-b813-42be-b0b4-35a310e59e43 req-c360e5c8-ee13-4375-9355-c946cf8d776c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:32:08 np0005596060 nova_compute[247421]: 2026-01-26 18:32:08.492 247428 DEBUG nova.compute.manager [req-3e6e92fe-b813-42be-b0b4-35a310e59e43 req-c360e5c8-ee13-4375-9355-c946cf8d776c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] No waiting events found dispatching network-vif-unplugged-ca62000c-903a-41ab-abeb-c6427e62fa46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:32:08 np0005596060 nova_compute[247421]: 2026-01-26 18:32:08.492 247428 WARNING nova.compute.manager [req-3e6e92fe-b813-42be-b0b4-35a310e59e43 req-c360e5c8-ee13-4375-9355-c946cf8d776c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received unexpected event network-vif-unplugged-ca62000c-903a-41ab-abeb-c6427e62fa46 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 26 13:32:08 np0005596060 systemd[1]: libpod-conmon-5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913.scope: Deactivated successfully.
Jan 26 13:32:08 np0005596060 podman[287763]: 2026-01-26 18:32:08.613922116 +0000 UTC m=+0.104334825 container remove 5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 13:32:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 200 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 6.5 KiB/s rd, 40 KiB/s wr, 5 op/s
Jan 26 13:32:08 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:08.621 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[38229937-b175-442f-8d33-6faff9921db7]: (4, ('Mon Jan 26 06:32:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b (5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913)\n5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913\nMon Jan 26 06:32:08 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b (5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913)\n5c6aa8eaadd76ecf6f97f5b52208915c56bb1aebbc6d3fcddfabf68570cb7913\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:08 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:08.622 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4223fc-6bf3-4b5a-b68a-f126d5ebee7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:08 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:08.623 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde54f204-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:32:08 np0005596060 nova_compute[247421]: 2026-01-26 18:32:08.625 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:08 np0005596060 kernel: tapde54f204-70: left promiscuous mode
Jan 26 13:32:08 np0005596060 nova_compute[247421]: 2026-01-26 18:32:08.642 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:08 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:08.645 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[958ef1f6-6cec-4819-9001-923d65e9d791]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:08 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:08.662 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d7189a-048c-4596-a5ce-085148e68fd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:08 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:08.663 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[adeb7f43-09be-4329-948f-a764a99703bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:08 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:08.680 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[75bfb5d8-bd02-486b-b840-dc24ffc760a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606092, 'reachable_time': 37217, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287782, 'error': None, 'target': 'ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:08 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:08.682 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:32:08 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:08.682 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[7d27922c-eca3-4b3f-ae98-cc73beeff225]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:08 np0005596060 systemd[1]: run-netns-ovnmeta\x2dde54f204\x2d706b\x2d4f67\x2d80ee\x2d0be6151f732b.mount: Deactivated successfully.
Jan 26 13:32:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Jan 26 13:32:09 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Jan 26 13:32:09 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Jan 26 13:32:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:09.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:09.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:09 np0005596060 nova_compute[247421]: 2026-01-26 18:32:09.334 247428 DEBUG nova.storage.rbd_utils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] cloning vms/ebdb1528-b5f5-4593-8801-7a25fc358497_disk@76cbb7fbff424318b7296dc3516dfc57 to images/301fa383-abdf-41f5-a256-6a98f149d04c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 26 13:32:09 np0005596060 nova_compute[247421]: 2026-01-26 18:32:09.955 247428 DEBUG nova.storage.rbd_utils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] flattening images/301fa383-abdf-41f5-a256-6a98f149d04c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 26 13:32:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Jan 26 13:32:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 200 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 7.8 KiB/s rd, 34 KiB/s wr, 6 op/s
Jan 26 13:32:10 np0005596060 nova_compute[247421]: 2026-01-26 18:32:10.684 247428 DEBUG nova.compute.manager [req-a7f34abf-5318-41ab-b43c-0309fdc0b4cc req-bde6662b-6c87-46cb-9102-b40a565460d8 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:32:10 np0005596060 nova_compute[247421]: 2026-01-26 18:32:10.685 247428 DEBUG oslo_concurrency.lockutils [req-a7f34abf-5318-41ab-b43c-0309fdc0b4cc req-bde6662b-6c87-46cb-9102-b40a565460d8 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:32:10 np0005596060 nova_compute[247421]: 2026-01-26 18:32:10.685 247428 DEBUG oslo_concurrency.lockutils [req-a7f34abf-5318-41ab-b43c-0309fdc0b4cc req-bde6662b-6c87-46cb-9102-b40a565460d8 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:32:10 np0005596060 nova_compute[247421]: 2026-01-26 18:32:10.685 247428 DEBUG oslo_concurrency.lockutils [req-a7f34abf-5318-41ab-b43c-0309fdc0b4cc req-bde6662b-6c87-46cb-9102-b40a565460d8 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:32:10 np0005596060 nova_compute[247421]: 2026-01-26 18:32:10.685 247428 DEBUG nova.compute.manager [req-a7f34abf-5318-41ab-b43c-0309fdc0b4cc req-bde6662b-6c87-46cb-9102-b40a565460d8 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] No waiting events found dispatching network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:32:10 np0005596060 nova_compute[247421]: 2026-01-26 18:32:10.686 247428 WARNING nova.compute.manager [req-a7f34abf-5318-41ab-b43c-0309fdc0b4cc req-bde6662b-6c87-46cb-9102-b40a565460d8 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received unexpected event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 26 13:32:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Jan 26 13:32:10 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Jan 26 13:32:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:32:11 np0005596060 nova_compute[247421]: 2026-01-26 18:32:11.174 247428 DEBUG nova.storage.rbd_utils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] removing snapshot(76cbb7fbff424318b7296dc3516dfc57) on rbd image(ebdb1528-b5f5-4593-8801-7a25fc358497_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 26 13:32:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:11.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:11.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Jan 26 13:32:11 np0005596060 nova_compute[247421]: 2026-01-26 18:32:11.921 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Jan 26 13:32:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Jan 26 13:32:12 np0005596060 nova_compute[247421]: 2026-01-26 18:32:12.050 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:12 np0005596060 nova_compute[247421]: 2026-01-26 18:32:12.267 247428 DEBUG nova.storage.rbd_utils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] creating snapshot(snap) on rbd image(301fa383-abdf-41f5-a256-6a98f149d04c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 26 13:32:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 231 MiB data, 444 MiB used, 21 GiB / 21 GiB avail; 893 KiB/s rd, 2.9 MiB/s wr, 75 op/s
Jan 26 13:32:12 np0005596060 nova_compute[247421]: 2026-01-26 18:32:12.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:32:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Jan 26 13:32:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Jan 26 13:32:13 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Jan 26 13:32:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:13.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:13.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:13 np0005596060 podman[287875]: 2026-01-26 18:32:13.795061844 +0000 UTC m=+0.056702848 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:32:13 np0005596060 podman[287876]: 2026-01-26 18:32:13.833972163 +0000 UTC m=+0.093274388 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 26 13:32:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:32:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:32:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:32:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:32:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:32:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:32:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 279 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 8.7 MiB/s rd, 8.6 MiB/s wr, 206 op/s
Jan 26 13:32:14 np0005596060 nova_compute[247421]: 2026-01-26 18:32:14.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:32:14 np0005596060 nova_compute[247421]: 2026-01-26 18:32:14.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:32:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:14.760 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:32:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:14.761 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:32:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:14.761 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:32:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:32:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:15.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:32:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:32:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:15.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:32:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:32:16 np0005596060 nova_compute[247421]: 2026-01-26 18:32:16.500 247428 INFO nova.virt.libvirt.driver [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Snapshot image upload complete#033[00m
Jan 26 13:32:16 np0005596060 nova_compute[247421]: 2026-01-26 18:32:16.501 247428 DEBUG nova.compute.manager [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:32:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 279 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 191 op/s
Jan 26 13:32:16 np0005596060 nova_compute[247421]: 2026-01-26 18:32:16.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:32:16 np0005596060 nova_compute[247421]: 2026-01-26 18:32:16.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:32:16 np0005596060 nova_compute[247421]: 2026-01-26 18:32:16.957 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:17 np0005596060 nova_compute[247421]: 2026-01-26 18:32:17.052 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:17.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:17.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:18 np0005596060 nova_compute[247421]: 2026-01-26 18:32:18.294 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:32:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 279 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 6.1 MiB/s wr, 166 op/s
Jan 26 13:32:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:32:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:19.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:32:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:19.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 279 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.4 MiB/s wr, 97 op/s
Jan 26 13:32:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:32:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Jan 26 13:32:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Jan 26 13:32:20 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Jan 26 13:32:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:21.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:21.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:21 np0005596060 nova_compute[247421]: 2026-01-26 18:32:21.822 247428 INFO nova.compute.manager [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Shelve offloading#033[00m
Jan 26 13:32:21 np0005596060 nova_compute[247421]: 2026-01-26 18:32:21.830 247428 INFO nova.virt.libvirt.driver [-] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Instance destroyed successfully.#033[00m
Jan 26 13:32:21 np0005596060 nova_compute[247421]: 2026-01-26 18:32:21.830 247428 DEBUG nova.compute.manager [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:32:21 np0005596060 nova_compute[247421]: 2026-01-26 18:32:21.832 247428 DEBUG oslo_concurrency.lockutils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:32:21 np0005596060 nova_compute[247421]: 2026-01-26 18:32:21.833 247428 DEBUG oslo_concurrency.lockutils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquired lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:32:21 np0005596060 nova_compute[247421]: 2026-01-26 18:32:21.833 247428 DEBUG nova.network.neutron [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:32:21 np0005596060 nova_compute[247421]: 2026-01-26 18:32:21.874 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769452326.8726203, ebdb1528-b5f5-4593-8801-7a25fc358497 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:32:21 np0005596060 nova_compute[247421]: 2026-01-26 18:32:21.874 247428 INFO nova.compute.manager [-] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:32:21 np0005596060 nova_compute[247421]: 2026-01-26 18:32:21.958 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:22 np0005596060 nova_compute[247421]: 2026-01-26 18:32:22.054 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 279 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 540 B/s wr, 17 op/s
Jan 26 13:32:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:23.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:23.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:24 np0005596060 nova_compute[247421]: 2026-01-26 18:32:24.417 247428 DEBUG nova.compute.manager [None req-ec39ff5f-1b35-47b7-90f5-52122fa28599 - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:32:24 np0005596060 nova_compute[247421]: 2026-01-26 18:32:24.423 247428 DEBUG nova.compute.manager [None req-ec39ff5f-1b35-47b7-90f5-52122fa28599 - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:32:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 279 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 511 B/s wr, 16 op/s
Jan 26 13:32:24 np0005596060 nova_compute[247421]: 2026-01-26 18:32:24.635 247428 INFO nova.compute.manager [None req-ec39ff5f-1b35-47b7-90f5-52122fa28599 - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Jan 26 13:32:24 np0005596060 nova_compute[247421]: 2026-01-26 18:32:24.751 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:32:24 np0005596060 nova_compute[247421]: 2026-01-26 18:32:24.752 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:32:24 np0005596060 nova_compute[247421]: 2026-01-26 18:32:24.947 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:32:24 np0005596060 nova_compute[247421]: 2026-01-26 18:32:24.948 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:32:24 np0005596060 nova_compute[247421]: 2026-01-26 18:32:24.948 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:32:25 np0005596060 nova_compute[247421]: 2026-01-26 18:32:25.096 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:32:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:25.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:25.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:32:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 279 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 307 B/s wr, 13 op/s
Jan 26 13:32:26 np0005596060 nova_compute[247421]: 2026-01-26 18:32:26.961 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:27 np0005596060 nova_compute[247421]: 2026-01-26 18:32:27.087 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:27.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:32:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:27.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:32:28 np0005596060 nova_compute[247421]: 2026-01-26 18:32:28.254 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 279 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 11 KiB/s wr, 2 op/s
Jan 26 13:32:28 np0005596060 nova_compute[247421]: 2026-01-26 18:32:28.870 247428 DEBUG nova.network.neutron [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updating instance_info_cache with network_info: [{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:32:29 np0005596060 nova_compute[247421]: 2026-01-26 18:32:29.185 247428 DEBUG oslo_concurrency.lockutils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Releasing lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:32:29 np0005596060 nova_compute[247421]: 2026-01-26 18:32:29.187 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquired lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:32:29 np0005596060 nova_compute[247421]: 2026-01-26 18:32:29.187 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 26 13:32:29 np0005596060 nova_compute[247421]: 2026-01-26 18:32:29.188 247428 DEBUG nova.objects.instance [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:32:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:29.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:29.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 279 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 511 B/s rd, 11 KiB/s wr, 2 op/s
Jan 26 13:32:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:32:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:31.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:32:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:31.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:32:31 np0005596060 nova_compute[247421]: 2026-01-26 18:32:31.962 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:31 np0005596060 nova_compute[247421]: 2026-01-26 18:32:31.966 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:31 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:31.965 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:32:31 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:31.966 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:32:32 np0005596060 nova_compute[247421]: 2026-01-26 18:32:32.088 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 279 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 438 B/s rd, 9.1 KiB/s wr, 1 op/s
Jan 26 13:32:32 np0005596060 nova_compute[247421]: 2026-01-26 18:32:32.713 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:32.968 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.057 247428 INFO nova.virt.libvirt.driver [-] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Instance destroyed successfully.#033[00m
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.058 247428 DEBUG nova.objects.instance [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'resources' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.078 247428 DEBUG nova.virt.libvirt.vif [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:31:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1465409842',display_name='tempest-TestShelveInstance-server-1465409842',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1465409842',id=22,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBArPa0GPQW3updI5wEeWfHenCcjGPGWD88434ubT+vOQr3X0Eo9eIdeVp23Kl758az+2Tg1EnoD3gvKGqOjgjRSe43W1eqMdMcY+qIEIlduzaNHNym4w1xAu5VTrRKiBeQ==',key_name='tempest-TestShelveInstance-1450425907',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:31:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d2387917610d4d928d60d38ade9e3305',ramdisk_id='',reservation_id='r-b1yl1dsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1084421254',owner_user_name='tempest-TestShelveInstance-1084421254-project-member',shelved_at='2026-01-26T18:32:16.500997',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='301fa383-abdf-41f5-a256-6a98f149d04c'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:32:07Z,user_data=None,user_id='6dd15a25d55a4c818b4f121ca4c79ac7',uuid=ebdb1528-b5f5-4593-8801-7a25fc358497,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.079 247428 DEBUG nova.network.os_vif_util [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Converting VIF {"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.079 247428 DEBUG nova.network.os_vif_util [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.080 247428 DEBUG os_vif [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.081 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.081 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca62000c-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.083 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.085 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.086 247428 INFO os_vif [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90')
Jan 26 13:32:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:33.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.270 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updating instance_info_cache with network_info: [{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 13:32:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:32:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:33.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.443 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Releasing lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.443 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.443 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.444 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.444 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.527 247428 DEBUG nova.compute.manager [req-c0f5b7ee-9600-479e-9dc5-b423d72744a5 req-1cbda15b-ed0c-44e0-8076-6f333dd3e339 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-changed-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.528 247428 DEBUG nova.compute.manager [req-c0f5b7ee-9600-479e-9dc5-b423d72744a5 req-1cbda15b-ed0c-44e0-8076-6f333dd3e339 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Refreshing instance network info cache due to event network-changed-ca62000c-903a-41ab-abeb-c6427e62fa46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.528 247428 DEBUG oslo_concurrency.lockutils [req-c0f5b7ee-9600-479e-9dc5-b423d72744a5 req-1cbda15b-ed0c-44e0-8076-6f333dd3e339 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.529 247428 DEBUG oslo_concurrency.lockutils [req-c0f5b7ee-9600-479e-9dc5-b423d72744a5 req-1cbda15b-ed0c-44e0-8076-6f333dd3e339 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.530 247428 DEBUG nova.network.neutron [req-c0f5b7ee-9600-479e-9dc5-b423d72744a5 req-1cbda15b-ed0c-44e0-8076-6f333dd3e339 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Refreshing network info cache for port ca62000c-903a-41ab-abeb-c6427e62fa46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.610 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.610 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.610 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.611 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 13:32:33 np0005596060 nova_compute[247421]: 2026-01-26 18:32:33.611 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:32:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:32:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1733185684' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.127 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.373 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.373 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.504 247428 INFO nova.virt.libvirt.driver [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Deleting instance files /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497_del
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.505 247428 INFO nova.virt.libvirt.driver [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Deletion of /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497_del complete
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.535 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.537 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4688MB free_disk=20.942455291748047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.537 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.538 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:32:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 279 MiB data, 474 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 8.8 KiB/s wr, 1 op/s
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.854 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.854 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.880 247428 INFO nova.scheduler.client.report [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Deleted allocations for instance ebdb1528-b5f5-4593-8801-7a25fc358497
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.954 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing inventories for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.976 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating ProviderTree inventory for provider c679f5ea-e093-4909-bb04-0342c8551a8f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 13:32:34 np0005596060 nova_compute[247421]: 2026-01-26 18:32:34.977 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.003 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing aggregate associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.048 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing trait associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.067 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.094 247428 DEBUG oslo_concurrency.lockutils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:32:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:35.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:35.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:32:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789130340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.517 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.523 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.854 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.890 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.890 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.891 247428 DEBUG oslo_concurrency.lockutils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.891 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.891 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.923 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.923 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.930 247428 DEBUG oslo_concurrency.processutils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:32:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.959 247428 DEBUG nova.network.neutron [req-c0f5b7ee-9600-479e-9dc5-b423d72744a5 req-1cbda15b-ed0c-44e0-8076-6f333dd3e339 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updated VIF entry in instance network info cache for port ca62000c-903a-41ab-abeb-c6427e62fa46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 13:32:35 np0005596060 nova_compute[247421]: 2026-01-26 18:32:35.960 247428 DEBUG nova.network.neutron [req-c0f5b7ee-9600-479e-9dc5-b423d72744a5 req-1cbda15b-ed0c-44e0-8076-6f333dd3e339 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updating instance_info_cache with network_info: [{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": null, "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapca62000c-90", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 13:32:36 np0005596060 nova_compute[247421]: 2026-01-26 18:32:36.017 247428 DEBUG oslo_concurrency.lockutils [req-c0f5b7ee-9600-479e-9dc5-b423d72744a5 req-1cbda15b-ed0c-44e0-8076-6f333dd3e339 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 13:32:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:32:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/794555463' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:32:36 np0005596060 nova_compute[247421]: 2026-01-26 18:32:36.365 247428 DEBUG oslo_concurrency.processutils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:32:36 np0005596060 nova_compute[247421]: 2026-01-26 18:32:36.371 247428 DEBUG nova.compute.provider_tree [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:32:36 np0005596060 nova_compute[247421]: 2026-01-26 18:32:36.394 247428 DEBUG nova.scheduler.client.report [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:32:36 np0005596060 nova_compute[247421]: 2026-01-26 18:32:36.421 247428 DEBUG oslo_concurrency.lockutils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:32:36 np0005596060 nova_compute[247421]: 2026-01-26 18:32:36.494 247428 DEBUG oslo_concurrency.lockutils [None req-cca1a36c-e39c-48b3-9071-88250de15e16 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 33.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:32:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 268 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 2.2 KiB/s rd, 9.3 KiB/s wr, 4 op/s
Jan 26 13:32:37 np0005596060 nova_compute[247421]: 2026-01-26 18:32:37.090 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:32:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:37.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:32:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:37.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:37 np0005596060 nova_compute[247421]: 2026-01-26 18:32:37.853 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:38 np0005596060 nova_compute[247421]: 2026-01-26 18:32:38.083 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 200 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 29 op/s
Jan 26 13:32:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:39.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:39.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:39 np0005596060 nova_compute[247421]: 2026-01-26 18:32:39.438 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:32:39 np0005596060 nova_compute[247421]: 2026-01-26 18:32:39.439 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:32:39 np0005596060 nova_compute[247421]: 2026-01-26 18:32:39.439 247428 INFO nova.compute.manager [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Unshelving#033[00m
Jan 26 13:32:39 np0005596060 nova_compute[247421]: 2026-01-26 18:32:39.588 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:32:39 np0005596060 nova_compute[247421]: 2026-01-26 18:32:39.589 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:32:39 np0005596060 nova_compute[247421]: 2026-01-26 18:32:39.593 247428 DEBUG nova.objects.instance [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'pci_requests' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:32:39 np0005596060 nova_compute[247421]: 2026-01-26 18:32:39.610 247428 DEBUG nova.objects.instance [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'numa_topology' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:32:39 np0005596060 nova_compute[247421]: 2026-01-26 18:32:39.626 247428 DEBUG nova.virt.hardware [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 26 13:32:39 np0005596060 nova_compute[247421]: 2026-01-26 18:32:39.626 247428 INFO nova.compute.claims [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 26 13:32:39 np0005596060 nova_compute[247421]: 2026-01-26 18:32:39.731 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:32:39 np0005596060 nova_compute[247421]: 2026-01-26 18:32:39.732 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 26 13:32:39 np0005596060 nova_compute[247421]: 2026-01-26 18:32:39.810 247428 DEBUG oslo_concurrency.processutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:32:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:32:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/889694425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:32:40 np0005596060 nova_compute[247421]: 2026-01-26 18:32:40.249 247428 DEBUG oslo_concurrency.processutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:32:40 np0005596060 nova_compute[247421]: 2026-01-26 18:32:40.255 247428 DEBUG nova.compute.provider_tree [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:32:40 np0005596060 nova_compute[247421]: 2026-01-26 18:32:40.294 247428 DEBUG nova.scheduler.client.report [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:32:40 np0005596060 nova_compute[247421]: 2026-01-26 18:32:40.323 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:32:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:32:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1381562760' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:32:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:32:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1381562760' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:32:40 np0005596060 nova_compute[247421]: 2026-01-26 18:32:40.514 247428 INFO nova.network.neutron [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updating port ca62000c-903a-41ab-abeb-c6427e62fa46 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 26 13:32:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 200 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 13:32:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:32:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:41.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:41.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.711123) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452361711161, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 893, "num_deletes": 252, "total_data_size": 1328163, "memory_usage": 1351536, "flush_reason": "Manual Compaction"}
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452361721475, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1313112, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38029, "largest_seqno": 38921, "table_properties": {"data_size": 1308585, "index_size": 2179, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10101, "raw_average_key_size": 19, "raw_value_size": 1299433, "raw_average_value_size": 2568, "num_data_blocks": 96, "num_entries": 506, "num_filter_entries": 506, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769452291, "oldest_key_time": 1769452291, "file_creation_time": 1769452361, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 10394 microseconds, and 4815 cpu microseconds.
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.721520) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1313112 bytes OK
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.721538) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.723401) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.723414) EVENT_LOG_v1 {"time_micros": 1769452361723410, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.723429) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1323880, prev total WAL file size 1323880, number of live WAL files 2.
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.723960) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1282KB)], [83(9280KB)]
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452361724007, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 10816333, "oldest_snapshot_seqno": -1}
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6238 keys, 8872281 bytes, temperature: kUnknown
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452361788283, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8872281, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8832261, "index_size": 23334, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 161230, "raw_average_key_size": 25, "raw_value_size": 8721648, "raw_average_value_size": 1398, "num_data_blocks": 930, "num_entries": 6238, "num_filter_entries": 6238, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769452361, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.788569) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8872281 bytes
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.789833) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.0 rd, 137.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.1 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(15.0) write-amplify(6.8) OK, records in: 6759, records dropped: 521 output_compression: NoCompression
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.789877) EVENT_LOG_v1 {"time_micros": 1769452361789862, "job": 48, "event": "compaction_finished", "compaction_time_micros": 64382, "compaction_time_cpu_micros": 20448, "output_level": 6, "num_output_files": 1, "total_output_size": 8872281, "num_input_records": 6759, "num_output_records": 6238, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452361790430, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452361792041, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.723866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.792091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.792095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.792097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.792098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:32:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:32:41.792100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:32:42 np0005596060 nova_compute[247421]: 2026-01-26 18:32:42.093 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:42 np0005596060 nova_compute[247421]: 2026-01-26 18:32:42.464 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:32:42 np0005596060 nova_compute[247421]: 2026-01-26 18:32:42.465 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquired lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:32:42 np0005596060 nova_compute[247421]: 2026-01-26 18:32:42.465 247428 DEBUG nova.network.neutron [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:32:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 200 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 13:32:42 np0005596060 nova_compute[247421]: 2026-01-26 18:32:42.741 247428 DEBUG nova.compute.manager [req-9cbc1144-b876-4803-89ee-165578aea279 req-15a58410-3005-4591-87c0-2095297cc45f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-changed-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:32:42 np0005596060 nova_compute[247421]: 2026-01-26 18:32:42.742 247428 DEBUG nova.compute.manager [req-9cbc1144-b876-4803-89ee-165578aea279 req-15a58410-3005-4591-87c0-2095297cc45f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Refreshing instance network info cache due to event network-changed-ca62000c-903a-41ab-abeb-c6427e62fa46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:32:42 np0005596060 nova_compute[247421]: 2026-01-26 18:32:42.742 247428 DEBUG oslo_concurrency.lockutils [req-9cbc1144-b876-4803-89ee-165578aea279 req-15a58410-3005-4591-87c0-2095297cc45f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:32:43 np0005596060 nova_compute[247421]: 2026-01-26 18:32:43.085 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:43.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:43.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:32:44
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'vms', 'default.rgw.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'volumes', 'backups', 'cephfs.cephfs.meta', 'images']
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 200 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 13:32:44 np0005596060 podman[288147]: 2026-01-26 18:32:44.79204662 +0000 UTC m=+0.052909663 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent)
Jan 26 13:32:44 np0005596060 podman[288148]: 2026-01-26 18:32:44.859149389 +0000 UTC m=+0.116404191 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:32:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:32:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:45.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:32:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:45.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:32:45 np0005596060 nova_compute[247421]: 2026-01-26 18:32:45.813 247428 DEBUG nova.network.neutron [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updating instance_info_cache with network_info: [{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:32:45 np0005596060 nova_compute[247421]: 2026-01-26 18:32:45.893 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Releasing lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:32:45 np0005596060 nova_compute[247421]: 2026-01-26 18:32:45.895 247428 DEBUG nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 26 13:32:45 np0005596060 nova_compute[247421]: 2026-01-26 18:32:45.895 247428 INFO nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Creating image(s)#033[00m
Jan 26 13:32:45 np0005596060 nova_compute[247421]: 2026-01-26 18:32:45.929 247428 DEBUG nova.storage.rbd_utils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:32:45 np0005596060 nova_compute[247421]: 2026-01-26 18:32:45.935 247428 DEBUG nova.objects.instance [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:32:45 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:32:45 np0005596060 nova_compute[247421]: 2026-01-26 18:32:45.937 247428 DEBUG oslo_concurrency.lockutils [req-9cbc1144-b876-4803-89ee-165578aea279 req-15a58410-3005-4591-87c0-2095297cc45f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:32:45 np0005596060 nova_compute[247421]: 2026-01-26 18:32:45.937 247428 DEBUG nova.network.neutron [req-9cbc1144-b876-4803-89ee-165578aea279 req-15a58410-3005-4591-87c0-2095297cc45f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Refreshing network info cache for port ca62000c-903a-41ab-abeb-c6427e62fa46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:32:46 np0005596060 nova_compute[247421]: 2026-01-26 18:32:46.107 247428 DEBUG nova.storage.rbd_utils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:32:46 np0005596060 nova_compute[247421]: 2026-01-26 18:32:46.137 247428 DEBUG nova.storage.rbd_utils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:32:46 np0005596060 nova_compute[247421]: 2026-01-26 18:32:46.142 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "7ab67101e1a45a134a8093188de4b0f3ae8b04af" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:32:46 np0005596060 nova_compute[247421]: 2026-01-26 18:32:46.144 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "7ab67101e1a45a134a8093188de4b0f3ae8b04af" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:32:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 200 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 13:32:46 np0005596060 nova_compute[247421]: 2026-01-26 18:32:46.940 247428 DEBUG nova.virt.libvirt.imagebackend [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Image locations are: [{'url': 'rbd://d4cd1917-5876-51b6-bc64-65a16199754d/images/301fa383-abdf-41f5-a256-6a98f149d04c/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d4cd1917-5876-51b6-bc64-65a16199754d/images/301fa383-abdf-41f5-a256-6a98f149d04c/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 26 13:32:47 np0005596060 nova_compute[247421]: 2026-01-26 18:32:47.001 247428 DEBUG nova.virt.libvirt.imagebackend [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Selected location: {'url': 'rbd://d4cd1917-5876-51b6-bc64-65a16199754d/images/301fa383-abdf-41f5-a256-6a98f149d04c/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 26 13:32:47 np0005596060 nova_compute[247421]: 2026-01-26 18:32:47.002 247428 DEBUG nova.storage.rbd_utils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] cloning images/301fa383-abdf-41f5-a256-6a98f149d04c@snap to None/ebdb1528-b5f5-4593-8801-7a25fc358497_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 26 13:32:47 np0005596060 nova_compute[247421]: 2026-01-26 18:32:47.133 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:47.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:47.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:48 np0005596060 nova_compute[247421]: 2026-01-26 18:32:48.088 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 200 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 682 B/s wr, 30 op/s
Jan 26 13:32:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:32:49 np0005596060 nova_compute[247421]: 2026-01-26 18:32:49.265 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "7ab67101e1a45a134a8093188de4b0f3ae8b04af" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:32:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:32:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:49.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:32:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:49.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:49 np0005596060 nova_compute[247421]: 2026-01-26 18:32:49.398 247428 DEBUG nova.objects.instance [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'migration_context' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:32:49 np0005596060 nova_compute[247421]: 2026-01-26 18:32:49.474 247428 DEBUG nova.storage.rbd_utils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] flattening vms/ebdb1528-b5f5-4593-8801-7a25fc358497_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 26 13:32:49 np0005596060 nova_compute[247421]: 2026-01-26 18:32:49.850 247428 DEBUG nova.network.neutron [req-9cbc1144-b876-4803-89ee-165578aea279 req-15a58410-3005-4591-87c0-2095297cc45f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updated VIF entry in instance network info cache for port ca62000c-903a-41ab-abeb-c6427e62fa46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:32:49 np0005596060 nova_compute[247421]: 2026-01-26 18:32:49.851 247428 DEBUG nova.network.neutron [req-9cbc1144-b876-4803-89ee-165578aea279 req-15a58410-3005-4591-87c0-2095297cc45f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updating instance_info_cache with network_info: [{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:32:49 np0005596060 nova_compute[247421]: 2026-01-26 18:32:49.896 247428 DEBUG oslo_concurrency.lockutils [req-9cbc1144-b876-4803-89ee-165578aea279 req-15a58410-3005-4591-87c0-2095297cc45f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:32:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:32:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:32:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 200 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 4.8 KiB/s rd, 5 op/s
Jan 26 13:32:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:32:50 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:32:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:51.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Jan 26 13:32:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:32:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:51.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.680 247428 DEBUG nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Image rbd:vms/ebdb1528-b5f5-4593-8801-7a25fc358497_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.681 247428 DEBUG nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.681 247428 DEBUG nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Ensure instance console log exists: /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.681 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.682 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.682 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.684 247428 DEBUG nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Start _get_guest_xml network_info=[{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T18:32:02Z,direct_url=<?>,disk_format='raw',id=301fa383-abdf-41f5-a256-6a98f149d04c,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1465409842-shelved',owner='d2387917610d4d928d60d38ade9e3305',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T18:32:15Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.688 247428 WARNING nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.696 247428 DEBUG nova.virt.libvirt.host [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.697 247428 DEBUG nova.virt.libvirt.host [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.701 247428 DEBUG nova.virt.libvirt.host [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.702 247428 DEBUG nova.virt.libvirt.host [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.703 247428 DEBUG nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.703 247428 DEBUG nova.virt.hardware [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T18:32:02Z,direct_url=<?>,disk_format='raw',id=301fa383-abdf-41f5-a256-6a98f149d04c,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-1465409842-shelved',owner='d2387917610d4d928d60d38ade9e3305',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T18:32:15Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.703 247428 DEBUG nova.virt.hardware [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.703 247428 DEBUG nova.virt.hardware [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.704 247428 DEBUG nova.virt.hardware [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.704 247428 DEBUG nova.virt.hardware [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.704 247428 DEBUG nova.virt.hardware [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.704 247428 DEBUG nova.virt.hardware [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.704 247428 DEBUG nova.virt.hardware [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.704 247428 DEBUG nova.virt.hardware [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.705 247428 DEBUG nova.virt.hardware [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.705 247428 DEBUG nova.virt.hardware [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.705 247428 DEBUG nova.objects.instance [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:32:51 np0005596060 nova_compute[247421]: 2026-01-26 18:32:51.736 247428 DEBUG oslo_concurrency.processutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:32:51 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 2ec94634-fd4f-4eff-9da5-26a4fddccda6 does not exist
Jan 26 13:32:51 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5dfb220d-4d29-446a-a609-a95e6e9d44ea does not exist
Jan 26 13:32:51 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 12d5b846-c078-4c12-85df-dfdf467bb156 does not exist
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:32:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.135 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:32:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2810490857' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.199 247428 DEBUG oslo_concurrency.processutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.235 247428 DEBUG nova.storage.rbd_utils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.239 247428 DEBUG oslo_concurrency.processutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:32:52 np0005596060 podman[288733]: 2026-01-26 18:32:52.53141998 +0000 UTC m=+0.111967989 container create a7c40e91772efe10531064d813652861f4fdaaa2d262e521708b7e321c8bc5fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:32:52 np0005596060 podman[288733]: 2026-01-26 18:32:52.439729643 +0000 UTC m=+0.020277672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.561 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:52 np0005596060 systemd[1]: Started libpod-conmon-a7c40e91772efe10531064d813652861f4fdaaa2d262e521708b7e321c8bc5fa.scope.
Jan 26 13:32:52 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:32:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 223 MiB data, 436 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 44 op/s
Jan 26 13:32:52 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:32:52 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:32:52 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:32:52 np0005596060 podman[288733]: 2026-01-26 18:32:52.652038706 +0000 UTC m=+0.232586745 container init a7c40e91772efe10531064d813652861f4fdaaa2d262e521708b7e321c8bc5fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:32:52 np0005596060 podman[288733]: 2026-01-26 18:32:52.663186686 +0000 UTC m=+0.243734695 container start a7c40e91772efe10531064d813652861f4fdaaa2d262e521708b7e321c8bc5fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:32:52 np0005596060 podman[288733]: 2026-01-26 18:32:52.668713646 +0000 UTC m=+0.249261685 container attach a7c40e91772efe10531064d813652861f4fdaaa2d262e521708b7e321c8bc5fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:32:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:32:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704146160' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:32:52 np0005596060 beautiful_elgamal[288749]: 167 167
Jan 26 13:32:52 np0005596060 podman[288733]: 2026-01-26 18:32:52.672649435 +0000 UTC m=+0.253197454 container died a7c40e91772efe10531064d813652861f4fdaaa2d262e521708b7e321c8bc5fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 26 13:32:52 np0005596060 systemd[1]: libpod-a7c40e91772efe10531064d813652861f4fdaaa2d262e521708b7e321c8bc5fa.scope: Deactivated successfully.
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.692 247428 DEBUG oslo_concurrency.processutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.695 247428 DEBUG nova.virt.libvirt.vif [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T18:31:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1465409842',display_name='tempest-TestShelveInstance-server-1465409842',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1465409842',id=22,image_ref='301fa383-abdf-41f5-a256-6a98f149d04c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1450425907',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:31:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d2387917610d4d928d60d38ade9e3305',ramdisk_id='',reservation_id='r-b1yl1dsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1084421254',owner_user_name='tempest-TestShelveInstance-1084421254-project-member',shelved_at='2026-01-26T18:32:16.500997',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='301fa383-abdf-41f5-a256-6a98f149d04c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:32:39Z,user_data=None,user_id='6dd15a25d55a4c818b4f121ca4c79ac7',uuid=ebdb1528-b5f5-4593-8801-7a25fc358497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.695 247428 DEBUG nova.network.os_vif_util [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Converting VIF {"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.696 247428 DEBUG nova.network.os_vif_util [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.697 247428 DEBUG nova.objects.instance [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'pci_devices' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:32:52 np0005596060 systemd[1]: var-lib-containers-storage-overlay-5cdcf144952fb50172fd1f268fe3f9a476f5979147dace8375d6e14b73f6f6d2-merged.mount: Deactivated successfully.
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.714 247428 DEBUG nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  <uuid>ebdb1528-b5f5-4593-8801-7a25fc358497</uuid>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  <name>instance-00000016</name>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <nova:name>tempest-TestShelveInstance-server-1465409842</nova:name>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:32:51</nova:creationTime>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <nova:user uuid="6dd15a25d55a4c818b4f121ca4c79ac7">tempest-TestShelveInstance-1084421254-project-member</nova:user>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <nova:project uuid="d2387917610d4d928d60d38ade9e3305">tempest-TestShelveInstance-1084421254</nova:project>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="301fa383-abdf-41f5-a256-6a98f149d04c"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <nova:port uuid="ca62000c-903a-41ab-abeb-c6427e62fa46">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <entry name="serial">ebdb1528-b5f5-4593-8801-7a25fc358497</entry>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <entry name="uuid">ebdb1528-b5f5-4593-8801-7a25fc358497</entry>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/ebdb1528-b5f5-4593-8801-7a25fc358497_disk">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/ebdb1528-b5f5-4593-8801-7a25fc358497_disk.config">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:8b:ab:0c"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <target dev="tapca62000c-90"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/console.log" append="off"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <input type="keyboard" bus="usb"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:32:52 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:32:52 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:32:52 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:32:52 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.715 247428 DEBUG nova.compute.manager [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Preparing to wait for external event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.716 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.716 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.717 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:32:52 np0005596060 podman[288733]: 2026-01-26 18:32:52.717295398 +0000 UTC m=+0.297843407 container remove a7c40e91772efe10531064d813652861f4fdaaa2d262e521708b7e321c8bc5fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_elgamal, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.717 247428 DEBUG nova.virt.libvirt.vif [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T18:31:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1465409842',display_name='tempest-TestShelveInstance-server-1465409842',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1465409842',id=22,image_ref='301fa383-abdf-41f5-a256-6a98f149d04c',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-1450425907',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:31:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d2387917610d4d928d60d38ade9e3305',ramdisk_id='',reservation_id='r-b1yl1dsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1084421254',owner_user_name='tempest-TestShelveInstance-1084421254-project-member',shelved_at='2026-01-26T18:32:16.500997',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='301fa383-abdf-41f5-a256-6a98f149d04c'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:32:39Z,user_data=None,user_id='6dd15a25d55a4c818b4f121ca4c79ac7',uuid=ebdb1528-b5f5-4593-8801-7a25fc358497,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.718 247428 DEBUG nova.network.os_vif_util [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Converting VIF {"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.718 247428 DEBUG nova.network.os_vif_util [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.719 247428 DEBUG os_vif [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.719 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.719 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.720 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.725 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.725 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca62000c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.726 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapca62000c-90, col_values=(('external_ids', {'iface-id': 'ca62000c-903a-41ab-abeb-c6427e62fa46', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:ab:0c', 'vm-uuid': 'ebdb1528-b5f5-4593-8801-7a25fc358497'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.760 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:52 np0005596060 systemd[1]: libpod-conmon-a7c40e91772efe10531064d813652861f4fdaaa2d262e521708b7e321c8bc5fa.scope: Deactivated successfully.
Jan 26 13:32:52 np0005596060 NetworkManager[48900]: <info>  [1769452372.7641] manager: (tapca62000c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.764 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.865 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.866 247428 INFO os_vif [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90')#033[00m
Jan 26 13:32:52 np0005596060 podman[288776]: 2026-01-26 18:32:52.884439735 +0000 UTC m=+0.044293556 container create c9b10b6be6904826db4e04f81884f348f629587aa3ef803a8fad8d39e8c224bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_satoshi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.884 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.936 247428 DEBUG nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.936 247428 DEBUG nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.937 247428 DEBUG nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] No VIF found with MAC fa:16:3e:8b:ab:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.937 247428 INFO nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Using config drive#033[00m
Jan 26 13:32:52 np0005596060 systemd[1]: Started libpod-conmon-c9b10b6be6904826db4e04f81884f348f629587aa3ef803a8fad8d39e8c224bc.scope.
Jan 26 13:32:52 np0005596060 podman[288776]: 2026-01-26 18:32:52.865680563 +0000 UTC m=+0.025534404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.966 247428 DEBUG nova.storage.rbd_utils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:32:52 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:32:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8015ac866057e213b6c2d210a33d30c655da579ef6ceaba692ba0de11e5a421/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8015ac866057e213b6c2d210a33d30c655da579ef6ceaba692ba0de11e5a421/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8015ac866057e213b6c2d210a33d30c655da579ef6ceaba692ba0de11e5a421/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8015ac866057e213b6c2d210a33d30c655da579ef6ceaba692ba0de11e5a421/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:52 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8015ac866057e213b6c2d210a33d30c655da579ef6ceaba692ba0de11e5a421/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:52 np0005596060 podman[288776]: 2026-01-26 18:32:52.986313109 +0000 UTC m=+0.146166960 container init c9b10b6be6904826db4e04f81884f348f629587aa3ef803a8fad8d39e8c224bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 13:32:52 np0005596060 nova_compute[247421]: 2026-01-26 18:32:52.992 247428 DEBUG nova.objects.instance [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:32:52 np0005596060 podman[288776]: 2026-01-26 18:32:52.999116831 +0000 UTC m=+0.158970652 container start c9b10b6be6904826db4e04f81884f348f629587aa3ef803a8fad8d39e8c224bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:32:53 np0005596060 podman[288776]: 2026-01-26 18:32:53.002781313 +0000 UTC m=+0.162635134 container attach c9b10b6be6904826db4e04f81884f348f629587aa3ef803a8fad8d39e8c224bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:32:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:53.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:53.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 26 13:32:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 26 13:32:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 26 13:32:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 26 13:32:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 26 13:32:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 26 13:32:53 np0005596060 nova_compute[247421]: 2026-01-26 18:32:53.493 247428 DEBUG nova.objects.instance [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'keypairs' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:32:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 26 13:32:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Jan 26 13:32:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Jan 26 13:32:53 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Jan 26 13:32:53 np0005596060 compassionate_satoshi[288796]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:32:53 np0005596060 compassionate_satoshi[288796]: --> relative data size: 1.0
Jan 26 13:32:53 np0005596060 compassionate_satoshi[288796]: --> All data devices are unavailable
Jan 26 13:32:53 np0005596060 systemd[1]: libpod-c9b10b6be6904826db4e04f81884f348f629587aa3ef803a8fad8d39e8c224bc.scope: Deactivated successfully.
Jan 26 13:32:53 np0005596060 podman[288776]: 2026-01-26 18:32:53.806520181 +0000 UTC m=+0.966374072 container died c9b10b6be6904826db4e04f81884f348f629587aa3ef803a8fad8d39e8c224bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_satoshi, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 13:32:53 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c8015ac866057e213b6c2d210a33d30c655da579ef6ceaba692ba0de11e5a421-merged.mount: Deactivated successfully.
Jan 26 13:32:53 np0005596060 podman[288776]: 2026-01-26 18:32:53.863586438 +0000 UTC m=+1.023440259 container remove c9b10b6be6904826db4e04f81884f348f629587aa3ef803a8fad8d39e8c224bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:32:53 np0005596060 systemd[1]: libpod-conmon-c9b10b6be6904826db4e04f81884f348f629587aa3ef803a8fad8d39e8c224bc.scope: Deactivated successfully.
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.113 247428 INFO nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Creating config drive at /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/disk.config#033[00m
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.121 247428 DEBUG oslo_concurrency.processutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf9antyj_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.259 247428 DEBUG oslo_concurrency.processutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf9antyj_" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.287 247428 DEBUG nova.storage.rbd_utils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] rbd image ebdb1528-b5f5-4593-8801-7a25fc358497_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.290 247428 DEBUG oslo_concurrency.processutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/disk.config ebdb1528-b5f5-4593-8801-7a25fc358497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:32:54 np0005596060 podman[289013]: 2026-01-26 18:32:54.525310711 +0000 UTC m=+0.056637707 container create a468c6cfe643ca35efd0e042a9c38589efda5e5e09a0da833c9e2ed10615cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.540 247428 DEBUG oslo_concurrency.processutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/disk.config ebdb1528-b5f5-4593-8801-7a25fc358497_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.542 247428 INFO nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Deleting local config drive /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497/disk.config because it was imported into RBD.#033[00m
Jan 26 13:32:54 np0005596060 systemd[1]: Started libpod-conmon-a468c6cfe643ca35efd0e042a9c38589efda5e5e09a0da833c9e2ed10615cf28.scope.
Jan 26 13:32:54 np0005596060 podman[289013]: 2026-01-26 18:32:54.494546067 +0000 UTC m=+0.025873093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:32:54 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:32:54 np0005596060 kernel: tapca62000c-90: entered promiscuous mode
Jan 26 13:32:54 np0005596060 NetworkManager[48900]: <info>  [1769452374.5942] manager: (tapca62000c-90): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Jan 26 13:32:54 np0005596060 ovn_controller[148842]: 2026-01-26T18:32:54Z|00130|binding|INFO|Claiming lport ca62000c-903a-41ab-abeb-c6427e62fa46 for this chassis.
Jan 26 13:32:54 np0005596060 ovn_controller[148842]: 2026-01-26T18:32:54Z|00131|binding|INFO|ca62000c-903a-41ab-abeb-c6427e62fa46: Claiming fa:16:3e:8b:ab:0c 10.100.0.9
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.597 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.601 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:54 np0005596060 podman[289013]: 2026-01-26 18:32:54.603672763 +0000 UTC m=+0.134999779 container init a468c6cfe643ca35efd0e042a9c38589efda5e5e09a0da833c9e2ed10615cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.607 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:54 np0005596060 podman[289013]: 2026-01-26 18:32:54.612429443 +0000 UTC m=+0.143756439 container start a468c6cfe643ca35efd0e042a9c38589efda5e5e09a0da833c9e2ed10615cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.612 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:54 np0005596060 NetworkManager[48900]: <info>  [1769452374.6154] manager: (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Jan 26 13:32:54 np0005596060 podman[289013]: 2026-01-26 18:32:54.616719811 +0000 UTC m=+0.148046807 container attach a468c6cfe643ca35efd0e042a9c38589efda5e5e09a0da833c9e2ed10615cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 13:32:54 np0005596060 NetworkManager[48900]: <info>  [1769452374.6169] manager: (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Jan 26 13:32:54 np0005596060 loving_germain[289034]: 167 167
Jan 26 13:32:54 np0005596060 systemd[1]: libpod-a468c6cfe643ca35efd0e042a9c38589efda5e5e09a0da833c9e2ed10615cf28.scope: Deactivated successfully.
Jan 26 13:32:54 np0005596060 podman[289013]: 2026-01-26 18:32:54.62423738 +0000 UTC m=+0.155564376 container died a468c6cfe643ca35efd0e042a9c38589efda5e5e09a0da833c9e2ed10615cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 13:32:54 np0005596060 systemd-machined[213879]: New machine qemu-11-instance-00000016.
Jan 26 13:32:54 np0005596060 systemd-udevd[289053]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:32:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 278 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 138 op/s
Jan 26 13:32:54 np0005596060 NetworkManager[48900]: <info>  [1769452374.6516] device (tapca62000c-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:32:54 np0005596060 NetworkManager[48900]: <info>  [1769452374.6530] device (tapca62000c-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:32:54 np0005596060 systemd[1]: Started Virtual Machine qemu-11-instance-00000016.
Jan 26 13:32:54 np0005596060 systemd[1]: var-lib-containers-storage-overlay-dbd49e1ce6313aa47427893f5bdad8a70ed56259f9385df1287546c66dbacfb5-merged.mount: Deactivated successfully.
Jan 26 13:32:54 np0005596060 podman[289013]: 2026-01-26 18:32:54.680820085 +0000 UTC m=+0.212147081 container remove a468c6cfe643ca35efd0e042a9c38589efda5e5e09a0da833c9e2ed10615cf28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 26 13:32:54 np0005596060 systemd[1]: libpod-conmon-a468c6cfe643ca35efd0e042a9c38589efda5e5e09a0da833c9e2ed10615cf28.scope: Deactivated successfully.
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.785 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:ab:0c 10.100.0.9'], port_security=['fa:16:3e:8b:ab:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ebdb1528-b5f5-4593-8801-7a25fc358497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de54f204-706b-4f67-80ee-0be6151f732b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2387917610d4d928d60d38ade9e3305', 'neutron:revision_number': '8', 'neutron:security_group_ids': '1cf612df-2e43-4b29-bdb2-6253f8c086ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f66771a-4d2d-438c-ad16-4a45d6686a0f, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=ca62000c-903a-41ab-abeb-c6427e62fa46) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.788 159331 INFO neutron.agent.ovn.metadata.agent [-] Port ca62000c-903a-41ab-abeb-c6427e62fa46 in datapath de54f204-706b-4f67-80ee-0be6151f732b bound to our chassis#033[00m
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.789 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de54f204-706b-4f67-80ee-0be6151f732b#033[00m
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.802 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[cd51191b-a46f-4c72-ae2a-504957e43e0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.803 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapde54f204-71 in ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.805 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapde54f204-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.805 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[3871ca65-a112-4d06-9112-cfbc2b5802b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.806 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[bba5e88c-ab12-4689-82e9-96fd157ca17d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.819 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[8423aca0-d4d0-4c7c-89e8-436531fd01a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.846 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.853 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2fdd84-c922-4904-99d9-86f7d37e8e1c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.864 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:54 np0005596060 podman[289079]: 2026-01-26 18:32:54.871570595 +0000 UTC m=+0.045076365 container create 4b090674fc0ff1c33ded472ab9f95a1ffc512dc045b86c74feab563bd968974a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_khayyam, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 26 13:32:54 np0005596060 ovn_controller[148842]: 2026-01-26T18:32:54Z|00132|binding|INFO|Setting lport ca62000c-903a-41ab-abeb-c6427e62fa46 ovn-installed in OVS
Jan 26 13:32:54 np0005596060 ovn_controller[148842]: 2026-01-26T18:32:54Z|00133|binding|INFO|Setting lport ca62000c-903a-41ab-abeb-c6427e62fa46 up in Southbound
Jan 26 13:32:54 np0005596060 nova_compute[247421]: 2026-01-26 18:32:54.878 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.885 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[992e6965-c9e0-4d61-89bb-c7748c9a1809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:54 np0005596060 systemd-udevd[289061]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:32:54 np0005596060 NetworkManager[48900]: <info>  [1769452374.8936] manager: (tapde54f204-70): new Veth device (/org/freedesktop/NetworkManager/Devices/74)
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.894 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3c6a59-cf23-44fb-a449-c6ac7d83d4e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.934 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[e85dec91-b460-4813-81a1-c2e949a1570c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.938 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[c59bb717-7407-4f43-9f01-bd609dad6f3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:54 np0005596060 systemd[1]: Started libpod-conmon-4b090674fc0ff1c33ded472ab9f95a1ffc512dc045b86c74feab563bd968974a.scope.
Jan 26 13:32:54 np0005596060 podman[289079]: 2026-01-26 18:32:54.853551872 +0000 UTC m=+0.027057662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:32:54 np0005596060 NetworkManager[48900]: <info>  [1769452374.9667] device (tapde54f204-70): carrier: link connected
Jan 26 13:32:54 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.973 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[d19f0aa4-e137-4b4d-8495-7c5e5e138096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe297da6ab6e5d8e8a8d7beac7251ff5ee50af5db5c502d66e95b3e611d918f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe297da6ab6e5d8e8a8d7beac7251ff5ee50af5db5c502d66e95b3e611d918f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe297da6ab6e5d8e8a8d7beac7251ff5ee50af5db5c502d66e95b3e611d918f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:54 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efe297da6ab6e5d8e8a8d7beac7251ff5ee50af5db5c502d66e95b3e611d918f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:54 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:54.992 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[de750065-058d-4307-b8e2-54c09a3c81ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde54f204-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:c6:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613954, 'reachable_time': 15183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289120, 'error': None, 'target': 'ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:55 np0005596060 podman[289079]: 2026-01-26 18:32:55.000606263 +0000 UTC m=+0.174112053 container init 4b090674fc0ff1c33ded472ab9f95a1ffc512dc045b86c74feab563bd968974a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_khayyam, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:32:55 np0005596060 podman[289079]: 2026-01-26 18:32:55.009673731 +0000 UTC m=+0.183179501 container start 4b090674fc0ff1c33ded472ab9f95a1ffc512dc045b86c74feab563bd968974a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:55.010 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[62cb730f-17e8-451a-b92c-77dc0c144f64]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:c618'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 613954, 'tstamp': 613954}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289121, 'error': None, 'target': 'ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:55 np0005596060 podman[289079]: 2026-01-26 18:32:55.01239494 +0000 UTC m=+0.185900740 container attach 4b090674fc0ff1c33ded472ab9f95a1ffc512dc045b86c74feab563bd968974a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_khayyam, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:55.031 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[2dbe2a2e-9419-4d0d-96a4-6ab501fc1e2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde54f204-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:c6:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613954, 'reachable_time': 15183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289124, 'error': None, 'target': 'ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:55.064 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[833762bd-8485-4225-b208-284ef373f49c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:55.128 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7b8c231e-8167-4d9a-8f37-eca80f630d48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:55.129 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde54f204-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:55.130 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:55.130 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde54f204-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:32:55 np0005596060 NetworkManager[48900]: <info>  [1769452375.1329] manager: (tapde54f204-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 26 13:32:55 np0005596060 kernel: tapde54f204-70: entered promiscuous mode
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:55.136 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde54f204-70, col_values=(('external_ids', {'iface-id': 'c2c971c3-99f6-4118-be80-725c9fa469d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:32:55 np0005596060 ovn_controller[148842]: 2026-01-26T18:32:55Z|00134|binding|INFO|Releasing lport c2c971c3-99f6-4118-be80-725c9fa469d2 from this chassis (sb_readonly=0)
Jan 26 13:32:55 np0005596060 nova_compute[247421]: 2026-01-26 18:32:55.150 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:55 np0005596060 nova_compute[247421]: 2026-01-26 18:32:55.157 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:55 np0005596060 nova_compute[247421]: 2026-01-26 18:32:55.158 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:55.158 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/de54f204-706b-4f67-80ee-0be6151f732b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/de54f204-706b-4f67-80ee-0be6151f732b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:55.159 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[80aa1be1-3de7-48aa-aa3d-2575053b7fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:55.160 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-de54f204-706b-4f67-80ee-0be6151f732b
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/de54f204-706b-4f67-80ee-0be6151f732b.pid.haproxy
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID de54f204-706b-4f67-80ee-0be6151f732b
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 13:32:55 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:32:55.160 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b', 'env', 'PROCESS_TAG=haproxy-de54f204-706b-4f67-80ee-0be6151f732b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/de54f204-706b-4f67-80ee-0be6151f732b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 13:32:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:55.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.003000076s ======
Jan 26 13:32:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:55.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000076s
Jan 26 13:32:55 np0005596060 podman[289193]: 2026-01-26 18:32:55.530570151 +0000 UTC m=+0.059778046 container create bbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:32:55 np0005596060 nova_compute[247421]: 2026-01-26 18:32:55.553 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452375.5529473, ebdb1528-b5f5-4593-8801-7a25fc358497 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 13:32:55 np0005596060 nova_compute[247421]: 2026-01-26 18:32:55.554 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] VM Started (Lifecycle Event)
Jan 26 13:32:55 np0005596060 systemd[1]: Started libpod-conmon-bbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3.scope.
Jan 26 13:32:55 np0005596060 podman[289193]: 2026-01-26 18:32:55.493134419 +0000 UTC m=+0.022342344 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:32:55 np0005596060 nova_compute[247421]: 2026-01-26 18:32:55.595 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 13:32:55 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:32:55 np0005596060 nova_compute[247421]: 2026-01-26 18:32:55.601 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452375.5531428, ebdb1528-b5f5-4593-8801-7a25fc358497 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 13:32:55 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87dba1549cd837a2b92d197a634cd29741287597e00889bf733b738f2aa3dcd8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:55 np0005596060 nova_compute[247421]: 2026-01-26 18:32:55.611 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] VM Paused (Lifecycle Event)
Jan 26 13:32:55 np0005596060 podman[289193]: 2026-01-26 18:32:55.62591553 +0000 UTC m=+0.155123455 container init bbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 13:32:55 np0005596060 podman[289193]: 2026-01-26 18:32:55.633592694 +0000 UTC m=+0.162800579 container start bbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:32:55 np0005596060 nova_compute[247421]: 2026-01-26 18:32:55.637 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 13:32:55 np0005596060 nova_compute[247421]: 2026-01-26 18:32:55.646 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 13:32:55 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[289213]: [NOTICE]   (289217) : New worker (289219) forked
Jan 26 13:32:55 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[289213]: [NOTICE]   (289217) : Loading success.
Jan 26 13:32:55 np0005596060 nova_compute[247421]: 2026-01-26 18:32:55.670 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]: {
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:    "1": [
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:        {
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            "devices": [
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "/dev/loop3"
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            ],
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            "lv_name": "ceph_lv0",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            "lv_size": "7511998464",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            "name": "ceph_lv0",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            "tags": {
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "ceph.cluster_name": "ceph",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "ceph.crush_device_class": "",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "ceph.encrypted": "0",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "ceph.osd_id": "1",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "ceph.type": "block",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:                "ceph.vdo": "0"
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            },
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            "type": "block",
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:            "vg_name": "ceph_vg0"
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:        }
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]:    ]
Jan 26 13:32:55 np0005596060 clever_khayyam[289116]: }
Jan 26 13:32:55 np0005596060 systemd[1]: libpod-4b090674fc0ff1c33ded472ab9f95a1ffc512dc045b86c74feab563bd968974a.scope: Deactivated successfully.
Jan 26 13:32:55 np0005596060 podman[289232]: 2026-01-26 18:32:55.844397029 +0000 UTC m=+0.026690103 container died 4b090674fc0ff1c33ded472ab9f95a1ffc512dc045b86c74feab563bd968974a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:32:55 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:32:55 np0005596060 systemd[1]: var-lib-containers-storage-overlay-efe297da6ab6e5d8e8a8d7beac7251ff5ee50af5db5c502d66e95b3e611d918f-merged.mount: Deactivated successfully.
Jan 26 13:32:55 np0005596060 podman[289232]: 2026-01-26 18:32:55.969383785 +0000 UTC m=+0.151676849 container remove 4b090674fc0ff1c33ded472ab9f95a1ffc512dc045b86c74feab563bd968974a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 13:32:55 np0005596060 systemd[1]: libpod-conmon-4b090674fc0ff1c33ded472ab9f95a1ffc512dc045b86c74feab563bd968974a.scope: Deactivated successfully.
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.035 247428 DEBUG nova.compute.manager [req-012b7dcf-4b24-42ca-8f55-07ced030c6f3 req-055c5850-34de-4447-aba8-a1e257014441 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.035 247428 DEBUG oslo_concurrency.lockutils [req-012b7dcf-4b24-42ca-8f55-07ced030c6f3 req-055c5850-34de-4447-aba8-a1e257014441 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.036 247428 DEBUG oslo_concurrency.lockutils [req-012b7dcf-4b24-42ca-8f55-07ced030c6f3 req-055c5850-34de-4447-aba8-a1e257014441 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.036 247428 DEBUG oslo_concurrency.lockutils [req-012b7dcf-4b24-42ca-8f55-07ced030c6f3 req-055c5850-34de-4447-aba8-a1e257014441 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.036 247428 DEBUG nova.compute.manager [req-012b7dcf-4b24-42ca-8f55-07ced030c6f3 req-055c5850-34de-4447-aba8-a1e257014441 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Processing event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.037 247428 DEBUG nova.compute.manager [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.041 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452376.0409584, ebdb1528-b5f5-4593-8801-7a25fc358497 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.041 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] VM Resumed (Lifecycle Event)
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.043 247428 DEBUG nova.virt.libvirt.driver [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.047 247428 INFO nova.virt.libvirt.driver [-] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Instance spawned successfully.
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.065 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.070 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.120 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.288 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.331 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Triggering sync for uuid ebdb1528-b5f5-4593-8801-7a25fc358497 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 13:32:56 np0005596060 nova_compute[247421]: 2026-01-26 18:32:56.332 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:32:56 np0005596060 podman[289389]: 2026-01-26 18:32:56.627744424 +0000 UTC m=+0.041162207 container create 2b2163aa1f07491a2c88c759b2fd86c8d817181af44bda50f1c63d64ae4cc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_volhard, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 13:32:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 279 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 5.8 MiB/s wr, 218 op/s
Jan 26 13:32:56 np0005596060 systemd[1]: Started libpod-conmon-2b2163aa1f07491a2c88c759b2fd86c8d817181af44bda50f1c63d64ae4cc13f.scope.
Jan 26 13:32:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Jan 26 13:32:56 np0005596060 podman[289389]: 2026-01-26 18:32:56.609388432 +0000 UTC m=+0.022806245 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:32:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Jan 26 13:32:56 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:32:56 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Jan 26 13:32:56 np0005596060 podman[289389]: 2026-01-26 18:32:56.738791429 +0000 UTC m=+0.152209232 container init 2b2163aa1f07491a2c88c759b2fd86c8d817181af44bda50f1c63d64ae4cc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 13:32:56 np0005596060 podman[289389]: 2026-01-26 18:32:56.747269872 +0000 UTC m=+0.160687655 container start 2b2163aa1f07491a2c88c759b2fd86c8d817181af44bda50f1c63d64ae4cc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_volhard, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 13:32:56 np0005596060 upbeat_volhard[289405]: 167 167
Jan 26 13:32:56 np0005596060 systemd[1]: libpod-2b2163aa1f07491a2c88c759b2fd86c8d817181af44bda50f1c63d64ae4cc13f.scope: Deactivated successfully.
Jan 26 13:32:56 np0005596060 podman[289389]: 2026-01-26 18:32:56.759834778 +0000 UTC m=+0.173252581 container attach 2b2163aa1f07491a2c88c759b2fd86c8d817181af44bda50f1c63d64ae4cc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_volhard, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:32:56 np0005596060 podman[289389]: 2026-01-26 18:32:56.761036469 +0000 UTC m=+0.174454252 container died 2b2163aa1f07491a2c88c759b2fd86c8d817181af44bda50f1c63d64ae4cc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:32:56 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4502cacca4391b16e9c1819387b6866802537d622e1bf92f2dfeb7e9d1b84793-merged.mount: Deactivated successfully.
Jan 26 13:32:56 np0005596060 podman[289389]: 2026-01-26 18:32:56.801182919 +0000 UTC m=+0.214600702 container remove 2b2163aa1f07491a2c88c759b2fd86c8d817181af44bda50f1c63d64ae4cc13f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_volhard, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:32:56 np0005596060 systemd[1]: libpod-conmon-2b2163aa1f07491a2c88c759b2fd86c8d817181af44bda50f1c63d64ae4cc13f.scope: Deactivated successfully.
Jan 26 13:32:56 np0005596060 podman[289429]: 2026-01-26 18:32:56.980355138 +0000 UTC m=+0.039824343 container create f9c020a4d4caaa88292babd567e9e37dab5c021bfd7357f0511c18e38d7a4e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 13:32:57 np0005596060 systemd[1]: Started libpod-conmon-f9c020a4d4caaa88292babd567e9e37dab5c021bfd7357f0511c18e38d7a4e03.scope.
Jan 26 13:32:57 np0005596060 podman[289429]: 2026-01-26 18:32:56.964689164 +0000 UTC m=+0.024158389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:32:57 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:32:57 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6adad02d8d1f49b33db87b4a91bb41522124165aa8f31246cadd94accab50d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:57 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6adad02d8d1f49b33db87b4a91bb41522124165aa8f31246cadd94accab50d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:57 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6adad02d8d1f49b33db87b4a91bb41522124165aa8f31246cadd94accab50d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:57 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6adad02d8d1f49b33db87b4a91bb41522124165aa8f31246cadd94accab50d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:32:57 np0005596060 podman[289429]: 2026-01-26 18:32:57.090358627 +0000 UTC m=+0.149827852 container init f9c020a4d4caaa88292babd567e9e37dab5c021bfd7357f0511c18e38d7a4e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:32:57 np0005596060 podman[289429]: 2026-01-26 18:32:57.09881675 +0000 UTC m=+0.158285955 container start f9c020a4d4caaa88292babd567e9e37dab5c021bfd7357f0511c18e38d7a4e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 13:32:57 np0005596060 podman[289429]: 2026-01-26 18:32:57.101972529 +0000 UTC m=+0.161441734 container attach f9c020a4d4caaa88292babd567e9e37dab5c021bfd7357f0511c18e38d7a4e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_archimedes, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 26 13:32:57 np0005596060 nova_compute[247421]: 2026-01-26 18:32:57.137 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:57.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:57.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:57 np0005596060 nova_compute[247421]: 2026-01-26 18:32:57.401 247428 DEBUG nova.compute.manager [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:32:57 np0005596060 nova_compute[247421]: 2026-01-26 18:32:57.505 247428 DEBUG oslo_concurrency.lockutils [None req-4a2f9b03-42c7-4c32-8e49-96fd6719957b 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 18.066s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:32:57 np0005596060 nova_compute[247421]: 2026-01-26 18:32:57.506 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:32:57 np0005596060 nova_compute[247421]: 2026-01-26 18:32:57.506 247428 INFO nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:32:57 np0005596060 nova_compute[247421]: 2026-01-26 18:32:57.507 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:32:57 np0005596060 nova_compute[247421]: 2026-01-26 18:32:57.761 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:32:57 np0005596060 confident_archimedes[289445]: {
Jan 26 13:32:57 np0005596060 confident_archimedes[289445]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:32:57 np0005596060 confident_archimedes[289445]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:32:57 np0005596060 confident_archimedes[289445]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:32:57 np0005596060 confident_archimedes[289445]:        "osd_id": 1,
Jan 26 13:32:57 np0005596060 confident_archimedes[289445]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:32:57 np0005596060 confident_archimedes[289445]:        "type": "bluestore"
Jan 26 13:32:57 np0005596060 confident_archimedes[289445]:    }
Jan 26 13:32:57 np0005596060 confident_archimedes[289445]: }
Jan 26 13:32:57 np0005596060 systemd[1]: libpod-f9c020a4d4caaa88292babd567e9e37dab5c021bfd7357f0511c18e38d7a4e03.scope: Deactivated successfully.
Jan 26 13:32:57 np0005596060 podman[289429]: 2026-01-26 18:32:57.927938307 +0000 UTC m=+0.987407532 container died f9c020a4d4caaa88292babd567e9e37dab5c021bfd7357f0511c18e38d7a4e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_archimedes, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 26 13:32:57 np0005596060 systemd[1]: var-lib-containers-storage-overlay-f6adad02d8d1f49b33db87b4a91bb41522124165aa8f31246cadd94accab50d9-merged.mount: Deactivated successfully.
Jan 26 13:32:58 np0005596060 podman[289429]: 2026-01-26 18:32:58.206450856 +0000 UTC m=+1.265920101 container remove f9c020a4d4caaa88292babd567e9e37dab5c021bfd7357f0511c18e38d7a4e03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_archimedes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:32:58 np0005596060 systemd[1]: libpod-conmon-f9c020a4d4caaa88292babd567e9e37dab5c021bfd7357f0511c18e38d7a4e03.scope: Deactivated successfully.
Jan 26 13:32:58 np0005596060 nova_compute[247421]: 2026-01-26 18:32:58.222 247428 DEBUG nova.compute.manager [req-2ec754d7-e9a9-4b28-8290-5cf709238fd7 req-500fa2db-617a-4608-b012-5fabf29277a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:32:58 np0005596060 nova_compute[247421]: 2026-01-26 18:32:58.224 247428 DEBUG oslo_concurrency.lockutils [req-2ec754d7-e9a9-4b28-8290-5cf709238fd7 req-500fa2db-617a-4608-b012-5fabf29277a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:32:58 np0005596060 nova_compute[247421]: 2026-01-26 18:32:58.224 247428 DEBUG oslo_concurrency.lockutils [req-2ec754d7-e9a9-4b28-8290-5cf709238fd7 req-500fa2db-617a-4608-b012-5fabf29277a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:32:58 np0005596060 nova_compute[247421]: 2026-01-26 18:32:58.225 247428 DEBUG oslo_concurrency.lockutils [req-2ec754d7-e9a9-4b28-8290-5cf709238fd7 req-500fa2db-617a-4608-b012-5fabf29277a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:32:58 np0005596060 nova_compute[247421]: 2026-01-26 18:32:58.225 247428 DEBUG nova.compute.manager [req-2ec754d7-e9a9-4b28-8290-5cf709238fd7 req-500fa2db-617a-4608-b012-5fabf29277a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] No waiting events found dispatching network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:32:58 np0005596060 nova_compute[247421]: 2026-01-26 18:32:58.225 247428 WARNING nova.compute.manager [req-2ec754d7-e9a9-4b28-8290-5cf709238fd7 req-500fa2db-617a-4608-b012-5fabf29277a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received unexpected event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 for instance with vm_state active and task_state None.#033[00m
Jan 26 13:32:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:32:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:32:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:32:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:32:58 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5ea7474d-c50c-4751-baf5-a36a4a3f3c2b does not exist
Jan 26 13:32:58 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 258e887e-93bc-4a6a-a884-03c508c3ea4b does not exist
Jan 26 13:32:58 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 587b2f8d-d889-4014-bed3-fbb95fcf7cc9 does not exist
Jan 26 13:32:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 231 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 8.7 MiB/s rd, 6.4 MiB/s wr, 550 op/s
Jan 26 13:32:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:32:59 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:32:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:32:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:32:59.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:32:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:32:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:32:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:32:59.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:32:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:32:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1682399055' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:32:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:32:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1682399055' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:33:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 231 MiB data, 449 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 455 op/s
Jan 26 13:33:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:33:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Jan 26 13:33:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Jan 26 13:33:00 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Jan 26 13:33:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:01.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:01.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:02 np0005596060 nova_compute[247421]: 2026-01-26 18:33:02.141 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 172 MiB data, 413 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 23 KiB/s wr, 433 op/s
Jan 26 13:33:02 np0005596060 nova_compute[247421]: 2026-01-26 18:33:02.763 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:33:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:03.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:33:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:33:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:03.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002175409745486693 of space, bias 1.0, pg target 0.6526229236460078 quantized to 32 (current 32)
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.001323348513400335 of space, bias 1.0, pg target 0.39700455402010054 quantized to 32 (current 32)
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:33:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:33:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 121 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.1 KiB/s wr, 352 op/s
Jan 26 13:33:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:05.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:05.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:05 np0005596060 nova_compute[247421]: 2026-01-26 18:33:05.648 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:33:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Jan 26 13:33:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Jan 26 13:33:06 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Jan 26 13:33:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 121 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.5 KiB/s wr, 68 op/s
Jan 26 13:33:07 np0005596060 nova_compute[247421]: 2026-01-26 18:33:07.143 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:07.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:07.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:07 np0005596060 nova_compute[247421]: 2026-01-26 18:33:07.766 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 121 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.5 KiB/s wr, 74 op/s
Jan 26 13:33:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:09.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:33:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:09.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:33:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:33:09Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:ab:0c 10.100.0.9
Jan 26 13:33:10 np0005596060 nova_compute[247421]: 2026-01-26 18:33:10.337 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 121 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1008 KiB/s rd, 2.1 KiB/s wr, 61 op/s
Jan 26 13:33:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:33:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:11.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:11.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:12 np0005596060 nova_compute[247421]: 2026-01-26 18:33:12.144 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 121 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 413 KiB/s rd, 14 KiB/s wr, 46 op/s
Jan 26 13:33:12 np0005596060 nova_compute[247421]: 2026-01-26 18:33:12.693 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:33:12 np0005596060 nova_compute[247421]: 2026-01-26 18:33:12.768 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:13.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:13.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:13 np0005596060 nova_compute[247421]: 2026-01-26 18:33:13.489 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:33:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:33:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:33:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:33:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:33:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:33:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 121 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 16 KiB/s wr, 56 op/s
Jan 26 13:33:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:14.761 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:33:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:14.762 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:33:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:14.762 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:33:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:15.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:33:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:15.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:33:15 np0005596060 podman[289589]: 2026-01-26 18:33:15.83411969 +0000 UTC m=+0.076351262 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 13:33:15 np0005596060 podman[289590]: 2026-01-26 18:33:15.859352385 +0000 UTC m=+0.101531276 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 13:33:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:33:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 121 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 26 KiB/s wr, 61 op/s
Jan 26 13:33:16 np0005596060 nova_compute[247421]: 2026-01-26 18:33:16.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:33:16 np0005596060 nova_compute[247421]: 2026-01-26 18:33:16.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:33:16 np0005596060 nova_compute[247421]: 2026-01-26 18:33:16.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:33:17 np0005596060 nova_compute[247421]: 2026-01-26 18:33:17.147 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:33:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:17.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:33:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:17.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:17 np0005596060 nova_compute[247421]: 2026-01-26 18:33:17.770 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 122 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 24 KiB/s wr, 54 op/s
Jan 26 13:33:18 np0005596060 nova_compute[247421]: 2026-01-26 18:33:18.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:33:18 np0005596060 nova_compute[247421]: 2026-01-26 18:33:18.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:33:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:19.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:19 np0005596060 nova_compute[247421]: 2026-01-26 18:33:19.359 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:33:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:19.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:33:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 122 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 23 KiB/s wr, 49 op/s
Jan 26 13:33:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:33:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:21.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:21.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:21 np0005596060 nova_compute[247421]: 2026-01-26 18:33:21.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:33:21 np0005596060 nova_compute[247421]: 2026-01-26 18:33:21.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:33:21 np0005596060 nova_compute[247421]: 2026-01-26 18:33:21.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:33:22 np0005596060 nova_compute[247421]: 2026-01-26 18:33:22.149 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:22 np0005596060 nova_compute[247421]: 2026-01-26 18:33:22.201 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:33:22 np0005596060 nova_compute[247421]: 2026-01-26 18:33:22.201 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquired lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:33:22 np0005596060 nova_compute[247421]: 2026-01-26 18:33:22.201 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 26 13:33:22 np0005596060 nova_compute[247421]: 2026-01-26 18:33:22.201 247428 DEBUG nova.objects.instance [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:33:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 139 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 703 KiB/s wr, 67 op/s
Jan 26 13:33:22 np0005596060 nova_compute[247421]: 2026-01-26 18:33:22.772 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:22 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:22.945 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:33:22 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:22.946 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:33:22 np0005596060 nova_compute[247421]: 2026-01-26 18:33:22.946 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:33:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:23.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:33:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:33:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:23.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:33:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 192 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 66 op/s
Jan 26 13:33:24 np0005596060 nova_compute[247421]: 2026-01-26 18:33:24.934 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updating instance_info_cache with network_info: [{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:33:24 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:24.949 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.047 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Releasing lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.047 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.048 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.048 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.048 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.077 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.078 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.078 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.078 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.078 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:33:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:25.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:25.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:33:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297967840' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.547 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.636 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.636 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.790 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.791 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4430MB free_disk=20.93320083618164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.791 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.791 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.905 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance ebdb1528-b5f5-4593-8801-7a25fc358497 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.905 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.905 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:33:25 np0005596060 nova_compute[247421]: 2026-01-26 18:33:25.953 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:33:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:33:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:33:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2728119446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:33:26 np0005596060 nova_compute[247421]: 2026-01-26 18:33:26.398 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:33:26 np0005596060 nova_compute[247421]: 2026-01-26 18:33:26.404 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:33:26 np0005596060 nova_compute[247421]: 2026-01-26 18:33:26.423 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:33:26 np0005596060 nova_compute[247421]: 2026-01-26 18:33:26.455 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:33:26 np0005596060 nova_compute[247421]: 2026-01-26 18:33:26.456 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:33:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 215 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 66 op/s
Jan 26 13:33:27 np0005596060 nova_compute[247421]: 2026-01-26 18:33:27.152 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:27.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:27.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:27 np0005596060 nova_compute[247421]: 2026-01-26 18:33:27.774 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 215 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 3.6 MiB/s wr, 62 op/s
Jan 26 13:33:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:33:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1128064177' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:33:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:33:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1128064177' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:33:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:29.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:29.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 215 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Jan 26 13:33:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:33:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:31.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:31.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:32 np0005596060 nova_compute[247421]: 2026-01-26 18:33:32.154 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 198 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.6 MiB/s wr, 116 op/s
Jan 26 13:33:32 np0005596060 nova_compute[247421]: 2026-01-26 18:33:32.844 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.137 247428 DEBUG oslo_concurrency.lockutils [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.138 247428 DEBUG oslo_concurrency.lockutils [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.138 247428 DEBUG oslo_concurrency.lockutils [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.139 247428 DEBUG oslo_concurrency.lockutils [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.139 247428 DEBUG oslo_concurrency.lockutils [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.140 247428 INFO nova.compute.manager [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Terminating instance#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.141 247428 DEBUG nova.compute.manager [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:33:33 np0005596060 kernel: tapca62000c-90 (unregistering): left promiscuous mode
Jan 26 13:33:33 np0005596060 NetworkManager[48900]: <info>  [1769452413.2005] device (tapca62000c-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.208 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:33 np0005596060 ovn_controller[148842]: 2026-01-26T18:33:33Z|00135|binding|INFO|Releasing lport ca62000c-903a-41ab-abeb-c6427e62fa46 from this chassis (sb_readonly=0)
Jan 26 13:33:33 np0005596060 ovn_controller[148842]: 2026-01-26T18:33:33Z|00136|binding|INFO|Setting lport ca62000c-903a-41ab-abeb-c6427e62fa46 down in Southbound
Jan 26 13:33:33 np0005596060 ovn_controller[148842]: 2026-01-26T18:33:33Z|00137|binding|INFO|Removing iface tapca62000c-90 ovn-installed in OVS
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.210 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.220 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:ab:0c 10.100.0.9'], port_security=['fa:16:3e:8b:ab:0c 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ebdb1528-b5f5-4593-8801-7a25fc358497', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de54f204-706b-4f67-80ee-0be6151f732b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd2387917610d4d928d60d38ade9e3305', 'neutron:revision_number': '9', 'neutron:security_group_ids': '1cf612df-2e43-4b29-bdb2-6253f8c086ab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3f66771a-4d2d-438c-ad16-4a45d6686a0f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=ca62000c-903a-41ab-abeb-c6427e62fa46) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.221 159331 INFO neutron.agent.ovn.metadata.agent [-] Port ca62000c-903a-41ab-abeb-c6427e62fa46 in datapath de54f204-706b-4f67-80ee-0be6151f732b unbound from our chassis#033[00m
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.223 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de54f204-706b-4f67-80ee-0be6151f732b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.224 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[32d33dbe-75b5-4034-85a9-692dbe60bb63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.225 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b namespace which is not needed anymore#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.229 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:33 np0005596060 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000016.scope: Deactivated successfully.
Jan 26 13:33:33 np0005596060 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000016.scope: Consumed 15.352s CPU time.
Jan 26 13:33:33 np0005596060 systemd-machined[213879]: Machine qemu-11-instance-00000016 terminated.
Jan 26 13:33:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:33.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:33 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[289213]: [NOTICE]   (289217) : haproxy version is 2.8.14-c23fe91
Jan 26 13:33:33 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[289213]: [NOTICE]   (289217) : path to executable is /usr/sbin/haproxy
Jan 26 13:33:33 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[289213]: [WARNING]  (289217) : Exiting Master process...
Jan 26 13:33:33 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[289213]: [ALERT]    (289217) : Current worker (289219) exited with code 143 (Terminated)
Jan 26 13:33:33 np0005596060 neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b[289213]: [WARNING]  (289217) : All workers exited. Exiting... (0)
Jan 26 13:33:33 np0005596060 systemd[1]: libpod-bbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3.scope: Deactivated successfully.
Jan 26 13:33:33 np0005596060 podman[289766]: 2026-01-26 18:33:33.364102826 +0000 UTC m=+0.054283718 container died bbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.384 247428 INFO nova.virt.libvirt.driver [-] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Instance destroyed successfully.#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.385 247428 DEBUG nova.objects.instance [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lazy-loading 'resources' on Instance uuid ebdb1528-b5f5-4593-8801-7a25fc358497 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:33:33 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3-userdata-shm.mount: Deactivated successfully.
Jan 26 13:33:33 np0005596060 systemd[1]: var-lib-containers-storage-overlay-87dba1549cd837a2b92d197a634cd29741287597e00889bf733b738f2aa3dcd8-merged.mount: Deactivated successfully.
Jan 26 13:33:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:33.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:33 np0005596060 podman[289766]: 2026-01-26 18:33:33.406697048 +0000 UTC m=+0.096877910 container cleanup bbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.409 247428 DEBUG nova.virt.libvirt.vif [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T18:31:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-1465409842',display_name='tempest-TestShelveInstance-server-1465409842',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-1465409842',id=22,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBArPa0GPQW3updI5wEeWfHenCcjGPGWD88434ubT+vOQr3X0Eo9eIdeVp23Kl758az+2Tg1EnoD3gvKGqOjgjRSe43W1eqMdMcY+qIEIlduzaNHNym4w1xAu5VTrRKiBeQ==',key_name='tempest-TestShelveInstance-1450425907',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:32:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d2387917610d4d928d60d38ade9e3305',ramdisk_id='',reservation_id='r-b1yl1dsf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1084421254',owner_user_name='tempest-TestShelveInstance-1084421254-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:32:57Z,user_data=None,user_id='6dd15a25d55a4c818b4f121ca4c79ac7',uuid=ebdb1528-b5f5-4593-8801-7a25fc358497,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.410 247428 DEBUG nova.network.os_vif_util [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Converting VIF {"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.410 247428 DEBUG nova.network.os_vif_util [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.411 247428 DEBUG os_vif [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:33:33 np0005596060 systemd[1]: libpod-conmon-bbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3.scope: Deactivated successfully.
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.412 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.413 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca62000c-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.416 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.419 247428 INFO os_vif [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8b:ab:0c,bridge_name='br-int',has_traffic_filtering=True,id=ca62000c-903a-41ab-abeb-c6427e62fa46,network=Network(de54f204-706b-4f67-80ee-0be6151f732b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapca62000c-90')#033[00m
Jan 26 13:33:33 np0005596060 podman[289803]: 2026-01-26 18:33:33.469829727 +0000 UTC m=+0.040042209 container remove bbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.476 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[848829cc-0084-431b-a244-d4c3cd04fea3]: (4, ('Mon Jan 26 06:33:33 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b (bbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3)\nbbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3\nMon Jan 26 06:33:33 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b (bbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3)\nbbc26fdb3458a61903794a00eb0833ff8f40f4b0a14080556be45ca188a9e5b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.477 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec56990-e69c-46bc-bf6c-11bd6231359a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.478 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde54f204-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.479 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:33 np0005596060 kernel: tapde54f204-70: left promiscuous mode
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.494 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.496 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb1e956-593f-4cc2-916e-8221899bbc29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.511 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8fd4ac-f663-4733-8774-50a9acb08e0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.513 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad4bf48-899e-4ad5-9b12-811d98934418]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.531 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f9e79b-8724-4fd8-93c1-3d8b4834695e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 613946, 'reachable_time': 17028, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289835, 'error': None, 'target': 'ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:33:33 np0005596060 systemd[1]: run-netns-ovnmeta\x2dde54f204\x2d706b\x2d4f67\x2d80ee\x2d0be6151f732b.mount: Deactivated successfully.
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.536 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-de54f204-706b-4f67-80ee-0be6151f732b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:33:33 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:33:33.536 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[d8dc641c-5c47-4935-9dcd-082f71d5eecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.872 247428 INFO nova.virt.libvirt.driver [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Deleting instance files /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497_del#033[00m
Jan 26 13:33:33 np0005596060 nova_compute[247421]: 2026-01-26 18:33:33.874 247428 INFO nova.virt.libvirt.driver [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Deletion of /var/lib/nova/instances/ebdb1528-b5f5-4593-8801-7a25fc358497_del complete#033[00m
Jan 26 13:33:34 np0005596060 nova_compute[247421]: 2026-01-26 18:33:34.000 247428 INFO nova.compute.manager [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:33:34 np0005596060 nova_compute[247421]: 2026-01-26 18:33:34.001 247428 DEBUG oslo.service.loopingcall [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:33:34 np0005596060 nova_compute[247421]: 2026-01-26 18:33:34.001 247428 DEBUG nova.compute.manager [-] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:33:34 np0005596060 nova_compute[247421]: 2026-01-26 18:33:34.002 247428 DEBUG nova.network.neutron [-] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:33:34 np0005596060 nova_compute[247421]: 2026-01-26 18:33:34.399 247428 DEBUG nova.compute.manager [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-changed-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:33:34 np0005596060 nova_compute[247421]: 2026-01-26 18:33:34.400 247428 DEBUG nova.compute.manager [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Refreshing instance network info cache due to event network-changed-ca62000c-903a-41ab-abeb-c6427e62fa46. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:33:34 np0005596060 nova_compute[247421]: 2026-01-26 18:33:34.401 247428 DEBUG oslo_concurrency.lockutils [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:33:34 np0005596060 nova_compute[247421]: 2026-01-26 18:33:34.401 247428 DEBUG oslo_concurrency.lockutils [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:33:34 np0005596060 nova_compute[247421]: 2026-01-26 18:33:34.401 247428 DEBUG nova.network.neutron [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Refreshing network info cache for port ca62000c-903a-41ab-abeb-c6427e62fa46 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:33:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 169 MiB data, 411 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 138 op/s
Jan 26 13:33:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:35.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:35.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:35 np0005596060 nova_compute[247421]: 2026-01-26 18:33:35.694 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:35 np0005596060 nova_compute[247421]: 2026-01-26 18:33:35.996 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.235 247428 DEBUG nova.network.neutron [-] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.333 247428 INFO nova.compute.manager [-] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Took 2.33 seconds to deallocate network for instance.#033[00m
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.469 247428 DEBUG oslo_concurrency.lockutils [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.470 247428 DEBUG oslo_concurrency.lockutils [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.549 247428 DEBUG oslo_concurrency.processutils [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:33:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 144 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 1014 KiB/s wr, 128 op/s
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.694 247428 DEBUG nova.compute.manager [req-0b16da7d-5335-4737-be91-5e72d478b694 req-cba37b77-6fcc-4900-99ef-918f82c143ea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.695 247428 DEBUG oslo_concurrency.lockutils [req-0b16da7d-5335-4737-be91-5e72d478b694 req-cba37b77-6fcc-4900-99ef-918f82c143ea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.695 247428 DEBUG oslo_concurrency.lockutils [req-0b16da7d-5335-4737-be91-5e72d478b694 req-cba37b77-6fcc-4900-99ef-918f82c143ea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.695 247428 DEBUG oslo_concurrency.lockutils [req-0b16da7d-5335-4737-be91-5e72d478b694 req-cba37b77-6fcc-4900-99ef-918f82c143ea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.695 247428 DEBUG nova.compute.manager [req-0b16da7d-5335-4737-be91-5e72d478b694 req-cba37b77-6fcc-4900-99ef-918f82c143ea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] No waiting events found dispatching network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.696 247428 WARNING nova.compute.manager [req-0b16da7d-5335-4737-be91-5e72d478b694 req-cba37b77-6fcc-4900-99ef-918f82c143ea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received unexpected event network-vif-plugged-ca62000c-903a-41ab-abeb-c6427e62fa46 for instance with vm_state deleted and task_state None.#033[00m
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.696 247428 DEBUG nova.compute.manager [req-0b16da7d-5335-4737-be91-5e72d478b694 req-cba37b77-6fcc-4900-99ef-918f82c143ea 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-vif-deleted-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:33:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:33:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3484054941' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.976 247428 DEBUG oslo_concurrency.processutils [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:33:36 np0005596060 nova_compute[247421]: 2026-01-26 18:33:36.982 247428 DEBUG nova.compute.provider_tree [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:33:37 np0005596060 nova_compute[247421]: 2026-01-26 18:33:37.001 247428 DEBUG nova.scheduler.client.report [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:33:37 np0005596060 nova_compute[247421]: 2026-01-26 18:33:37.044 247428 DEBUG oslo_concurrency.lockutils [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:33:37 np0005596060 nova_compute[247421]: 2026-01-26 18:33:37.157 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000051s ======
Jan 26 13:33:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:37.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 26 13:33:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:37.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:37 np0005596060 nova_compute[247421]: 2026-01-26 18:33:37.408 247428 INFO nova.scheduler.client.report [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Deleted allocations for instance ebdb1528-b5f5-4593-8801-7a25fc358497#033[00m
Jan 26 13:33:38 np0005596060 nova_compute[247421]: 2026-01-26 18:33:38.331 247428 DEBUG oslo_concurrency.lockutils [None req-33c29a9c-3ba8-4cec-b0d7-674b99d3c916 6dd15a25d55a4c818b4f121ca4c79ac7 d2387917610d4d928d60d38ade9e3305 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:33:38 np0005596060 nova_compute[247421]: 2026-01-26 18:33:38.450 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 20 KiB/s wr, 123 op/s
Jan 26 13:33:38 np0005596060 nova_compute[247421]: 2026-01-26 18:33:38.668 247428 DEBUG nova.network.neutron [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updated VIF entry in instance network info cache for port ca62000c-903a-41ab-abeb-c6427e62fa46. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:33:38 np0005596060 nova_compute[247421]: 2026-01-26 18:33:38.669 247428 DEBUG nova.network.neutron [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Updating instance_info_cache with network_info: [{"id": "ca62000c-903a-41ab-abeb-c6427e62fa46", "address": "fa:16:3e:8b:ab:0c", "network": {"id": "de54f204-706b-4f67-80ee-0be6151f732b", "bridge": "br-int", "label": "tempest-TestShelveInstance-47815065-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d2387917610d4d928d60d38ade9e3305", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapca62000c-90", "ovs_interfaceid": "ca62000c-903a-41ab-abeb-c6427e62fa46", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:33:38 np0005596060 nova_compute[247421]: 2026-01-26 18:33:38.817 247428 DEBUG oslo_concurrency.lockutils [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-ebdb1528-b5f5-4593-8801-7a25fc358497" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:33:38 np0005596060 nova_compute[247421]: 2026-01-26 18:33:38.818 247428 DEBUG nova.compute.manager [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-vif-unplugged-ca62000c-903a-41ab-abeb-c6427e62fa46 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:33:38 np0005596060 nova_compute[247421]: 2026-01-26 18:33:38.818 247428 DEBUG oslo_concurrency.lockutils [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:33:38 np0005596060 nova_compute[247421]: 2026-01-26 18:33:38.818 247428 DEBUG oslo_concurrency.lockutils [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:33:38 np0005596060 nova_compute[247421]: 2026-01-26 18:33:38.818 247428 DEBUG oslo_concurrency.lockutils [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "ebdb1528-b5f5-4593-8801-7a25fc358497-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:33:38 np0005596060 nova_compute[247421]: 2026-01-26 18:33:38.819 247428 DEBUG nova.compute.manager [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] No waiting events found dispatching network-vif-unplugged-ca62000c-903a-41ab-abeb-c6427e62fa46 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:33:38 np0005596060 nova_compute[247421]: 2026-01-26 18:33:38.819 247428 DEBUG nova.compute.manager [req-eb87be03-547d-468f-b3da-cce07b23ee62 req-8754b957-3d74-4660-ac27-88c0d9619930 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Received event network-vif-unplugged-ca62000c-903a-41ab-abeb-c6427e62fa46 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:33:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:39.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:39.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:33:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3691339361' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:33:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:33:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3691339361' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:33:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 123 op/s
Jan 26 13:33:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:33:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:41.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:41.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:42 np0005596060 nova_compute[247421]: 2026-01-26 18:33:42.160 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 88 MiB data, 363 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 123 op/s
Jan 26 13:33:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:43.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:43.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:43 np0005596060 nova_compute[247421]: 2026-01-26 18:33:43.453 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:44 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:33:44
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', 'vms', 'default.rgw.meta', 'volumes']
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 95 MiB data, 363 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 511 KiB/s wr, 80 op/s
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:33:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:33:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:45.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:45.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:33:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 103 MiB data, 372 MiB used, 21 GiB / 21 GiB avail; 174 KiB/s rd, 1.2 MiB/s wr, 55 op/s
Jan 26 13:33:46 np0005596060 podman[289917]: 2026-01-26 18:33:46.813496402 +0000 UTC m=+0.078868285 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 26 13:33:46 np0005596060 podman[289918]: 2026-01-26 18:33:46.847705054 +0000 UTC m=+0.112266527 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:33:47 np0005596060 nova_compute[247421]: 2026-01-26 18:33:47.161 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:47.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:47.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:48 np0005596060 nova_compute[247421]: 2026-01-26 18:33:48.382 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769452413.3804526, ebdb1528-b5f5-4593-8801-7a25fc358497 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:33:48 np0005596060 nova_compute[247421]: 2026-01-26 18:33:48.382 247428 INFO nova.compute.manager [-] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:33:48 np0005596060 nova_compute[247421]: 2026-01-26 18:33:48.456 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:48 np0005596060 nova_compute[247421]: 2026-01-26 18:33:48.505 247428 DEBUG nova.compute.manager [None req-4454c4a7-ba9b-4dda-9b17-c402e4c91964 - - - - - -] [instance: ebdb1528-b5f5-4593-8801-7a25fc358497] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:33:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 348 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 26 13:33:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:49.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:49.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 344 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:33:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:33:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:33:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:51.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:33:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:51.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:52 np0005596060 nova_compute[247421]: 2026-01-26 18:33:52.164 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 26 13:33:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:53.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:33:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:53.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:33:53 np0005596060 nova_compute[247421]: 2026-01-26 18:33:53.507 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 26 13:33:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:55.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:55.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:33:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.7 MiB/s wr, 60 op/s
Jan 26 13:33:57 np0005596060 nova_compute[247421]: 2026-01-26 18:33:57.168 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:57.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:33:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:57.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:33:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 149 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.4 MiB/s wr, 75 op/s
Jan 26 13:33:58 np0005596060 nova_compute[247421]: 2026-01-26 18:33:58.759 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:33:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:33:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:33:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:33:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:33:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:33:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:33:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:33:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:33:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:33:59.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:33:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:33:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:33:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:33:59.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:34:00 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 61ba0536-a59f-4f67-836b-558eaccb29f5 does not exist
Jan 26 13:34:00 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 71888192-f31f-4979-9e7e-5c8696be6878 does not exist
Jan 26 13:34:00 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 1f00fbad-f7cd-44aa-bedf-c53a596141ca does not exist
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:34:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:34:00.458 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:34:00 np0005596060 nova_compute[247421]: 2026-01-26 18:34:00.458 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:00 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:34:00.460 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:34:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 149 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.5 MiB/s wr, 39 op/s
Jan 26 13:34:00 np0005596060 podman[290407]: 2026-01-26 18:34:00.726455217 +0000 UTC m=+0.046337567 container create 3dbda3981dc71506f9a1b85cbb2a375ede3dd9e61f0db2e34cd5a679f00c41ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_williamson, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:34:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:34:00 np0005596060 systemd[1]: Started libpod-conmon-3dbda3981dc71506f9a1b85cbb2a375ede3dd9e61f0db2e34cd5a679f00c41ed.scope.
Jan 26 13:34:00 np0005596060 podman[290407]: 2026-01-26 18:34:00.704869714 +0000 UTC m=+0.024752064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:34:00 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:34:00 np0005596060 podman[290407]: 2026-01-26 18:34:00.835754058 +0000 UTC m=+0.155636418 container init 3dbda3981dc71506f9a1b85cbb2a375ede3dd9e61f0db2e34cd5a679f00c41ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_williamson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 13:34:00 np0005596060 podman[290407]: 2026-01-26 18:34:00.849071643 +0000 UTC m=+0.168953983 container start 3dbda3981dc71506f9a1b85cbb2a375ede3dd9e61f0db2e34cd5a679f00c41ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_williamson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:34:00 np0005596060 podman[290407]: 2026-01-26 18:34:00.853255498 +0000 UTC m=+0.173137858 container attach 3dbda3981dc71506f9a1b85cbb2a375ede3dd9e61f0db2e34cd5a679f00c41ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:34:00 np0005596060 gracious_williamson[290423]: 167 167
Jan 26 13:34:00 np0005596060 systemd[1]: libpod-3dbda3981dc71506f9a1b85cbb2a375ede3dd9e61f0db2e34cd5a679f00c41ed.scope: Deactivated successfully.
Jan 26 13:34:00 np0005596060 conmon[290423]: conmon 3dbda3981dc71506f9a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3dbda3981dc71506f9a1b85cbb2a375ede3dd9e61f0db2e34cd5a679f00c41ed.scope/container/memory.events
Jan 26 13:34:00 np0005596060 podman[290407]: 2026-01-26 18:34:00.860630574 +0000 UTC m=+0.180512914 container died 3dbda3981dc71506f9a1b85cbb2a375ede3dd9e61f0db2e34cd5a679f00c41ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 13:34:00 np0005596060 systemd[1]: var-lib-containers-storage-overlay-73abe23369a6d278e0fcfd65ebf40e007fce4a9fee0980d9612de48bba4af05f-merged.mount: Deactivated successfully.
Jan 26 13:34:00 np0005596060 podman[290407]: 2026-01-26 18:34:00.899857671 +0000 UTC m=+0.219740011 container remove 3dbda3981dc71506f9a1b85cbb2a375ede3dd9e61f0db2e34cd5a679f00c41ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:34:00 np0005596060 systemd[1]: libpod-conmon-3dbda3981dc71506f9a1b85cbb2a375ede3dd9e61f0db2e34cd5a679f00c41ed.scope: Deactivated successfully.
Jan 26 13:34:01 np0005596060 podman[290445]: 2026-01-26 18:34:01.092801277 +0000 UTC m=+0.044735607 container create 358b7137e39616d73af8c4d9c51e8a1dedf01face8d6a2c09b392063040e2c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:34:01 np0005596060 systemd[1]: Started libpod-conmon-358b7137e39616d73af8c4d9c51e8a1dedf01face8d6a2c09b392063040e2c62.scope.
Jan 26 13:34:01 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:34:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c95da7b29c0d5cf1d348305f54b919c232a8111ed3155fcdf22b93000b2487/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c95da7b29c0d5cf1d348305f54b919c232a8111ed3155fcdf22b93000b2487/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c95da7b29c0d5cf1d348305f54b919c232a8111ed3155fcdf22b93000b2487/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c95da7b29c0d5cf1d348305f54b919c232a8111ed3155fcdf22b93000b2487/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c95da7b29c0d5cf1d348305f54b919c232a8111ed3155fcdf22b93000b2487/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:01 np0005596060 podman[290445]: 2026-01-26 18:34:01.070884126 +0000 UTC m=+0.022818486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:34:01 np0005596060 podman[290445]: 2026-01-26 18:34:01.176313359 +0000 UTC m=+0.128247709 container init 358b7137e39616d73af8c4d9c51e8a1dedf01face8d6a2c09b392063040e2c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:34:01 np0005596060 podman[290445]: 2026-01-26 18:34:01.182831933 +0000 UTC m=+0.134766263 container start 358b7137e39616d73af8c4d9c51e8a1dedf01face8d6a2c09b392063040e2c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:34:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:34:01 np0005596060 podman[290445]: 2026-01-26 18:34:01.188562927 +0000 UTC m=+0.140497277 container attach 358b7137e39616d73af8c4d9c51e8a1dedf01face8d6a2c09b392063040e2c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 13:34:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:34:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:01.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:34:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:34:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:01.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:34:02 np0005596060 wizardly_jepsen[290461]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:34:02 np0005596060 wizardly_jepsen[290461]: --> relative data size: 1.0
Jan 26 13:34:02 np0005596060 wizardly_jepsen[290461]: --> All data devices are unavailable
Jan 26 13:34:02 np0005596060 systemd[1]: libpod-358b7137e39616d73af8c4d9c51e8a1dedf01face8d6a2c09b392063040e2c62.scope: Deactivated successfully.
Jan 26 13:34:02 np0005596060 podman[290445]: 2026-01-26 18:34:02.034162009 +0000 UTC m=+0.986096369 container died 358b7137e39616d73af8c4d9c51e8a1dedf01face8d6a2c09b392063040e2c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 13:34:02 np0005596060 systemd[1]: var-lib-containers-storage-overlay-72c95da7b29c0d5cf1d348305f54b919c232a8111ed3155fcdf22b93000b2487-merged.mount: Deactivated successfully.
Jan 26 13:34:02 np0005596060 podman[290445]: 2026-01-26 18:34:02.107058854 +0000 UTC m=+1.058993184 container remove 358b7137e39616d73af8c4d9c51e8a1dedf01face8d6a2c09b392063040e2c62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:34:02 np0005596060 systemd[1]: libpod-conmon-358b7137e39616d73af8c4d9c51e8a1dedf01face8d6a2c09b392063040e2c62.scope: Deactivated successfully.
Jan 26 13:34:02 np0005596060 nova_compute[247421]: 2026-01-26 18:34:02.170 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 167 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 26 13:34:02 np0005596060 podman[290631]: 2026-01-26 18:34:02.765401372 +0000 UTC m=+0.053909868 container create 401c424943f6632060dcc798362fbc1a6006c21e3d7ab2186d910fbe89cebb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:34:02 np0005596060 systemd[1]: Started libpod-conmon-401c424943f6632060dcc798362fbc1a6006c21e3d7ab2186d910fbe89cebb78.scope.
Jan 26 13:34:02 np0005596060 podman[290631]: 2026-01-26 18:34:02.735563341 +0000 UTC m=+0.024071857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:34:02 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:34:02 np0005596060 podman[290631]: 2026-01-26 18:34:02.856849683 +0000 UTC m=+0.145358179 container init 401c424943f6632060dcc798362fbc1a6006c21e3d7ab2186d910fbe89cebb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:34:02 np0005596060 podman[290631]: 2026-01-26 18:34:02.866104916 +0000 UTC m=+0.154613392 container start 401c424943f6632060dcc798362fbc1a6006c21e3d7ab2186d910fbe89cebb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 13:34:02 np0005596060 podman[290631]: 2026-01-26 18:34:02.870048515 +0000 UTC m=+0.158557011 container attach 401c424943f6632060dcc798362fbc1a6006c21e3d7ab2186d910fbe89cebb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 13:34:02 np0005596060 lucid_galois[290648]: 167 167
Jan 26 13:34:02 np0005596060 systemd[1]: libpod-401c424943f6632060dcc798362fbc1a6006c21e3d7ab2186d910fbe89cebb78.scope: Deactivated successfully.
Jan 26 13:34:02 np0005596060 conmon[290648]: conmon 401c424943f6632060dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-401c424943f6632060dcc798362fbc1a6006c21e3d7ab2186d910fbe89cebb78.scope/container/memory.events
Jan 26 13:34:02 np0005596060 podman[290631]: 2026-01-26 18:34:02.875273447 +0000 UTC m=+0.163781933 container died 401c424943f6632060dcc798362fbc1a6006c21e3d7ab2186d910fbe89cebb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:34:02 np0005596060 systemd[1]: var-lib-containers-storage-overlay-40ee24e2fc6607c1d1a8800238e9e285fd4e56e618f2f56f7f89ebececb3fb9b-merged.mount: Deactivated successfully.
Jan 26 13:34:02 np0005596060 podman[290631]: 2026-01-26 18:34:02.912488774 +0000 UTC m=+0.200997250 container remove 401c424943f6632060dcc798362fbc1a6006c21e3d7ab2186d910fbe89cebb78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_galois, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:34:02 np0005596060 systemd[1]: libpod-conmon-401c424943f6632060dcc798362fbc1a6006c21e3d7ab2186d910fbe89cebb78.scope: Deactivated successfully.
Jan 26 13:34:03 np0005596060 podman[290671]: 2026-01-26 18:34:03.069570327 +0000 UTC m=+0.044305916 container create 82e21a7355ec1159e8be0f0dda144933203160b15716f005332be99b5970bb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:34:03 np0005596060 systemd[1]: Started libpod-conmon-82e21a7355ec1159e8be0f0dda144933203160b15716f005332be99b5970bb9a.scope.
Jan 26 13:34:03 np0005596060 podman[290671]: 2026-01-26 18:34:03.049706987 +0000 UTC m=+0.024442596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:34:03 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:34:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c63ac68bfc851ed2a98134653e6c2e2ff6c84361a070e3276c533164c81ec3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c63ac68bfc851ed2a98134653e6c2e2ff6c84361a070e3276c533164c81ec3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c63ac68bfc851ed2a98134653e6c2e2ff6c84361a070e3276c533164c81ec3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:03 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c63ac68bfc851ed2a98134653e6c2e2ff6c84361a070e3276c533164c81ec3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:03 np0005596060 podman[290671]: 2026-01-26 18:34:03.168475436 +0000 UTC m=+0.143211035 container init 82e21a7355ec1159e8be0f0dda144933203160b15716f005332be99b5970bb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:34:03 np0005596060 podman[290671]: 2026-01-26 18:34:03.175836791 +0000 UTC m=+0.150572380 container start 82e21a7355ec1159e8be0f0dda144933203160b15716f005332be99b5970bb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 13:34:03 np0005596060 podman[290671]: 2026-01-26 18:34:03.178844587 +0000 UTC m=+0.153580176 container attach 82e21a7355ec1159e8be0f0dda144933203160b15716f005332be99b5970bb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 13:34:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:03.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:03.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:03 np0005596060 nova_compute[247421]: 2026-01-26 18:34:03.761 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]: {
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:    "1": [
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:        {
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            "devices": [
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "/dev/loop3"
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            ],
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            "lv_name": "ceph_lv0",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            "lv_size": "7511998464",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            "name": "ceph_lv0",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            "tags": {
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "ceph.cluster_name": "ceph",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "ceph.crush_device_class": "",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "ceph.encrypted": "0",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "ceph.osd_id": "1",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "ceph.type": "block",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:                "ceph.vdo": "0"
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            },
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            "type": "block",
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:            "vg_name": "ceph_vg0"
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:        }
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]:    ]
Jan 26 13:34:03 np0005596060 stupefied_mclaren[290687]: }
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021703206425646754 of space, bias 1.0, pg target 0.6510961927694027 quantized to 32 (current 32)
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:34:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:34:03 np0005596060 systemd[1]: libpod-82e21a7355ec1159e8be0f0dda144933203160b15716f005332be99b5970bb9a.scope: Deactivated successfully.
Jan 26 13:34:03 np0005596060 conmon[290687]: conmon 82e21a7355ec1159e8be <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82e21a7355ec1159e8be0f0dda144933203160b15716f005332be99b5970bb9a.scope/container/memory.events
Jan 26 13:34:03 np0005596060 podman[290671]: 2026-01-26 18:34:03.962129971 +0000 UTC m=+0.936865560 container died 82e21a7355ec1159e8be0f0dda144933203160b15716f005332be99b5970bb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:34:03 np0005596060 systemd[1]: var-lib-containers-storage-overlay-9c63ac68bfc851ed2a98134653e6c2e2ff6c84361a070e3276c533164c81ec3e-merged.mount: Deactivated successfully.
Jan 26 13:34:04 np0005596060 podman[290671]: 2026-01-26 18:34:04.018062448 +0000 UTC m=+0.992798037 container remove 82e21a7355ec1159e8be0f0dda144933203160b15716f005332be99b5970bb9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_mclaren, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:34:04 np0005596060 systemd[1]: libpod-conmon-82e21a7355ec1159e8be0f0dda144933203160b15716f005332be99b5970bb9a.scope: Deactivated successfully.
Jan 26 13:34:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 167 MiB data, 428 MiB used, 21 GiB / 21 GiB avail; 401 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 26 13:34:04 np0005596060 podman[290848]: 2026-01-26 18:34:04.710077105 +0000 UTC m=+0.043357132 container create 5d0bf19a20ecaca3db5fd7f594f646d77606cc53de29da6fa6fb2548227eb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 13:34:04 np0005596060 systemd[1]: Started libpod-conmon-5d0bf19a20ecaca3db5fd7f594f646d77606cc53de29da6fa6fb2548227eb8e3.scope.
Jan 26 13:34:04 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:34:04 np0005596060 podman[290848]: 2026-01-26 18:34:04.689642741 +0000 UTC m=+0.022922788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:34:04 np0005596060 podman[290848]: 2026-01-26 18:34:04.80128832 +0000 UTC m=+0.134568397 container init 5d0bf19a20ecaca3db5fd7f594f646d77606cc53de29da6fa6fb2548227eb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:34:04 np0005596060 podman[290848]: 2026-01-26 18:34:04.810573724 +0000 UTC m=+0.143853751 container start 5d0bf19a20ecaca3db5fd7f594f646d77606cc53de29da6fa6fb2548227eb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:34:04 np0005596060 podman[290848]: 2026-01-26 18:34:04.814966635 +0000 UTC m=+0.148246692 container attach 5d0bf19a20ecaca3db5fd7f594f646d77606cc53de29da6fa6fb2548227eb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 13:34:04 np0005596060 lucid_shannon[290864]: 167 167
Jan 26 13:34:04 np0005596060 systemd[1]: libpod-5d0bf19a20ecaca3db5fd7f594f646d77606cc53de29da6fa6fb2548227eb8e3.scope: Deactivated successfully.
Jan 26 13:34:04 np0005596060 podman[290848]: 2026-01-26 18:34:04.818439132 +0000 UTC m=+0.151719199 container died 5d0bf19a20ecaca3db5fd7f594f646d77606cc53de29da6fa6fb2548227eb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shannon, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 13:34:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-29b3b69eb7689bf5854736cb6ca26348d765e261d3345fef50c5c46549faa4be-merged.mount: Deactivated successfully.
Jan 26 13:34:04 np0005596060 podman[290848]: 2026-01-26 18:34:04.86323956 +0000 UTC m=+0.196519587 container remove 5d0bf19a20ecaca3db5fd7f594f646d77606cc53de29da6fa6fb2548227eb8e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:34:04 np0005596060 systemd[1]: libpod-conmon-5d0bf19a20ecaca3db5fd7f594f646d77606cc53de29da6fa6fb2548227eb8e3.scope: Deactivated successfully.
Jan 26 13:34:05 np0005596060 podman[290887]: 2026-01-26 18:34:05.053703653 +0000 UTC m=+0.050944943 container create 602b77f3c62538fea7a8a29a2ab50e491c10bac0b87867737aa26a90e98d4818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_williamson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 13:34:05 np0005596060 systemd[1]: Started libpod-conmon-602b77f3c62538fea7a8a29a2ab50e491c10bac0b87867737aa26a90e98d4818.scope.
Jan 26 13:34:05 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:34:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d086f907b14ef0ac2d9acedf4ceeefc125e6b5eaad43dd87ac6ac6fbb4a64d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d086f907b14ef0ac2d9acedf4ceeefc125e6b5eaad43dd87ac6ac6fbb4a64d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d086f907b14ef0ac2d9acedf4ceeefc125e6b5eaad43dd87ac6ac6fbb4a64d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d086f907b14ef0ac2d9acedf4ceeefc125e6b5eaad43dd87ac6ac6fbb4a64d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:34:05 np0005596060 podman[290887]: 2026-01-26 18:34:05.032577641 +0000 UTC m=+0.029818951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:34:05 np0005596060 podman[290887]: 2026-01-26 18:34:05.138399025 +0000 UTC m=+0.135640345 container init 602b77f3c62538fea7a8a29a2ab50e491c10bac0b87867737aa26a90e98d4818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 26 13:34:05 np0005596060 podman[290887]: 2026-01-26 18:34:05.144635152 +0000 UTC m=+0.141876442 container start 602b77f3c62538fea7a8a29a2ab50e491c10bac0b87867737aa26a90e98d4818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_williamson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:34:05 np0005596060 podman[290887]: 2026-01-26 18:34:05.148569581 +0000 UTC m=+0.145810871 container attach 602b77f3c62538fea7a8a29a2ab50e491c10bac0b87867737aa26a90e98d4818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 26 13:34:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:05.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:05.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:06 np0005596060 elegant_williamson[290903]: {
Jan 26 13:34:06 np0005596060 elegant_williamson[290903]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:34:06 np0005596060 elegant_williamson[290903]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:34:06 np0005596060 elegant_williamson[290903]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:34:06 np0005596060 elegant_williamson[290903]:        "osd_id": 1,
Jan 26 13:34:06 np0005596060 elegant_williamson[290903]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:34:06 np0005596060 elegant_williamson[290903]:        "type": "bluestore"
Jan 26 13:34:06 np0005596060 elegant_williamson[290903]:    }
Jan 26 13:34:06 np0005596060 elegant_williamson[290903]: }
Jan 26 13:34:06 np0005596060 systemd[1]: libpod-602b77f3c62538fea7a8a29a2ab50e491c10bac0b87867737aa26a90e98d4818.scope: Deactivated successfully.
Jan 26 13:34:06 np0005596060 podman[290887]: 2026-01-26 18:34:06.030807124 +0000 UTC m=+1.028048414 container died 602b77f3c62538fea7a8a29a2ab50e491c10bac0b87867737aa26a90e98d4818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:34:06 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0d086f907b14ef0ac2d9acedf4ceeefc125e6b5eaad43dd87ac6ac6fbb4a64d9-merged.mount: Deactivated successfully.
Jan 26 13:34:06 np0005596060 podman[290887]: 2026-01-26 18:34:06.096072406 +0000 UTC m=+1.093313686 container remove 602b77f3c62538fea7a8a29a2ab50e491c10bac0b87867737aa26a90e98d4818 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_williamson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:34:06 np0005596060 systemd[1]: libpod-conmon-602b77f3c62538fea7a8a29a2ab50e491c10bac0b87867737aa26a90e98d4818.scope: Deactivated successfully.
Jan 26 13:34:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:34:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:34:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:34:06 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:34:06 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev eb72fb81-fc63-4d0d-83fa-89c1fbb5c2da does not exist
Jan 26 13:34:06 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev e033a410-2c76-4289-801f-4554a6455453 does not exist
Jan 26 13:34:06 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 08d58919-bf3c-4656-81c8-3300b5b8923d does not exist
Jan 26 13:34:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:34:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:34:06.462 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:34:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 149 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 26 13:34:07 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:34:07 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:34:07 np0005596060 nova_compute[247421]: 2026-01-26 18:34:07.208 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:34:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:07.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:34:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:07.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Jan 26 13:34:08 np0005596060 nova_compute[247421]: 2026-01-26 18:34:08.764 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:09.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:09.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 317 KiB/s wr, 18 op/s
Jan 26 13:34:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:34:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:11.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:34:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:11.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:34:12 np0005596060 nova_compute[247421]: 2026-01-26 18:34:12.247 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 317 KiB/s wr, 22 op/s
Jan 26 13:34:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:13.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:13.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:13 np0005596060 nova_compute[247421]: 2026-01-26 18:34:13.766 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:14 np0005596060 nova_compute[247421]: 2026-01-26 18:34:14.059 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:34:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:34:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:34:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:34:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:34:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:34:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:34:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 597 B/s wr, 21 op/s
Jan 26 13:34:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:34:14.761 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:34:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:34:14.762 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:34:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:34:14.762 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:34:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:15.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:34:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:15.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:34:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:34:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 1.9 KiB/s wr, 21 op/s
Jan 26 13:34:17 np0005596060 nova_compute[247421]: 2026-01-26 18:34:17.250 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:17.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:17.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:17 np0005596060 nova_compute[247421]: 2026-01-26 18:34:17.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:34:17 np0005596060 podman[290990]: 2026-01-26 18:34:17.797087512 +0000 UTC m=+0.059147939 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 26 13:34:17 np0005596060 podman[290991]: 2026-01-26 18:34:17.82522465 +0000 UTC m=+0.085081492 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:34:18 np0005596060 nova_compute[247421]: 2026-01-26 18:34:18.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:34:18 np0005596060 nova_compute[247421]: 2026-01-26 18:34:18.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:34:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 2.7 KiB/s wr, 19 op/s
Jan 26 13:34:18 np0005596060 nova_compute[247421]: 2026-01-26 18:34:18.768 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:34:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:19.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:34:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:19.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:19 np0005596060 nova_compute[247421]: 2026-01-26 18:34:19.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:34:20 np0005596060 nova_compute[247421]: 2026-01-26 18:34:20.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:34:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 2.3 KiB/s wr, 6 op/s
Jan 26 13:34:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:34:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:34:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:21.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:34:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:21.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:22 np0005596060 nova_compute[247421]: 2026-01-26 18:34:22.251 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:22 np0005596060 nova_compute[247421]: 2026-01-26 18:34:22.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:34:22 np0005596060 nova_compute[247421]: 2026-01-26 18:34:22.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:34:22 np0005596060 nova_compute[247421]: 2026-01-26 18:34:22.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:34:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 2.3 KiB/s wr, 6 op/s
Jan 26 13:34:22 np0005596060 nova_compute[247421]: 2026-01-26 18:34:22.809 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:34:22 np0005596060 nova_compute[247421]: 2026-01-26 18:34:22.809 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:34:22 np0005596060 nova_compute[247421]: 2026-01-26 18:34:22.810 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:34:22 np0005596060 nova_compute[247421]: 2026-01-26 18:34:22.852 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:34:22 np0005596060 nova_compute[247421]: 2026-01-26 18:34:22.852 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:34:22 np0005596060 nova_compute[247421]: 2026-01-26 18:34:22.853 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:34:22 np0005596060 nova_compute[247421]: 2026-01-26 18:34:22.853 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:34:22 np0005596060 nova_compute[247421]: 2026-01-26 18:34:22.853 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:34:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:34:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2215284500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:34:23 np0005596060 nova_compute[247421]: 2026-01-26 18:34:23.334 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:34:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:23.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:34:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:23.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:34:23 np0005596060 nova_compute[247421]: 2026-01-26 18:34:23.486 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 13:34:23 np0005596060 nova_compute[247421]: 2026-01-26 18:34:23.488 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4657MB free_disk=20.942726135253906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 13:34:23 np0005596060 nova_compute[247421]: 2026-01-26 18:34:23.488 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:34:23 np0005596060 nova_compute[247421]: 2026-01-26 18:34:23.488 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:34:23 np0005596060 nova_compute[247421]: 2026-01-26 18:34:23.771 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:23 np0005596060 nova_compute[247421]: 2026-01-26 18:34:23.800 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 13:34:23 np0005596060 nova_compute[247421]: 2026-01-26 18:34:23.800 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 13:34:23 np0005596060 nova_compute[247421]: 2026-01-26 18:34:23.900 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:34:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:34:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/487165227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:34:24 np0005596060 nova_compute[247421]: 2026-01-26 18:34:24.367 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:34:24 np0005596060 nova_compute[247421]: 2026-01-26 18:34:24.375 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:34:24 np0005596060 nova_compute[247421]: 2026-01-26 18:34:24.556 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:34:24 np0005596060 nova_compute[247421]: 2026-01-26 18:34:24.634 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 13:34:24 np0005596060 nova_compute[247421]: 2026-01-26 18:34:24.635 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:34:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 852 B/s rd, 2.3 KiB/s wr, 1 op/s
Jan 26 13:34:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:25.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:25.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:34:26 np0005596060 nova_compute[247421]: 2026-01-26 18:34:26.631 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:34:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 341 B/s rd, 2.3 KiB/s wr, 1 op/s
Jan 26 13:34:26 np0005596060 nova_compute[247421]: 2026-01-26 18:34:26.787 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:34:27 np0005596060 nova_compute[247421]: 2026-01-26 18:34:27.253 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:27.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:27.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 1023 B/s wr, 0 op/s
Jan 26 13:34:28 np0005596060 nova_compute[247421]: 2026-01-26 18:34:28.775 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:29.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:34:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:29.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:34:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 26 13:34:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:34:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:31.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:31.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:32 np0005596060 nova_compute[247421]: 2026-01-26 18:34:32.254 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 90 MiB data, 386 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 852 B/s wr, 21 op/s
Jan 26 13:34:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:33.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:34:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:33.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:34:33 np0005596060 nova_compute[247421]: 2026-01-26 18:34:33.777 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 13:34:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:35.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:35.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:34:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 51 MiB data, 362 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 169 KiB/s wr, 41 op/s
Jan 26 13:34:37 np0005596060 nova_compute[247421]: 2026-01-26 18:34:37.255 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:37.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:37.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 26 13:34:38 np0005596060 nova_compute[247421]: 2026-01-26 18:34:38.779 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:39.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:34:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:39.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:34:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:34:39.863 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 13:34:39 np0005596060 nova_compute[247421]: 2026-01-26 18:34:39.863 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:34:39.864 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 13:34:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 26 13:34:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:34:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:41.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:41.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:42 np0005596060 nova_compute[247421]: 2026-01-26 18:34:42.258 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 26 13:34:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:43.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:34:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:43.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:34:43 np0005596060 nova_compute[247421]: 2026-01-26 18:34:43.782 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:34:44
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms', 'images', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:34:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:34:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:45.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:45.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:34:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 136 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 26 13:34:47 np0005596060 nova_compute[247421]: 2026-01-26 18:34:47.259 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:47.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:47.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 80 op/s
Jan 26 13:34:48 np0005596060 nova_compute[247421]: 2026-01-26 18:34:48.784 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:34:48 np0005596060 podman[291194]: 2026-01-26 18:34:48.85123844 +0000 UTC m=+0.092565101 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:34:48 np0005596060 podman[291195]: 2026-01-26 18:34:48.858319888 +0000 UTC m=+0.099593078 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 13:34:48 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:34:48.866 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:34:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:34:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:49.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:34:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:49.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 66 op/s
Jan 26 13:34:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:34:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:51.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:34:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:51.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:34:52 np0005596060 nova_compute[247421]: 2026-01-26 18:34:52.262 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:34:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:53.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:53.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:53 np0005596060 nova_compute[247421]: 2026-01-26 18:34:53.788 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:34:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:55.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:55.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:34:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:34:57 np0005596060 nova_compute[247421]: 2026-01-26 18:34:57.263 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:57.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:57.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 88 MiB data, 382 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 215 KiB/s wr, 71 op/s
Jan 26 13:34:58 np0005596060 nova_compute[247421]: 2026-01-26 18:34:58.790 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:34:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:34:59.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:34:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:34:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:34:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:34:59.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 88 MiB data, 382 MiB used, 21 GiB / 21 GiB avail; 211 KiB/s rd, 203 KiB/s wr, 12 op/s
Jan 26 13:35:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:35:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:35:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:01.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:35:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:01.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:02 np0005596060 nova_compute[247421]: 2026-01-26 18:35:02.267 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 112 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 401 KiB/s rd, 1.7 MiB/s wr, 55 op/s
Jan 26 13:35:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:03.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:03.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:03 np0005596060 nova_compute[247421]: 2026-01-26 18:35:03.793 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019344043713940072 of space, bias 1.0, pg target 0.5803213114182022 quantized to 32 (current 32)
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:35:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:35:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:35:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:05.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:05.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:35:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:35:07 np0005596060 nova_compute[247421]: 2026-01-26 18:35:07.270 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:07.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 13:35:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:07.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:35:07 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 810268e2-3f5e-43c6-8a0f-e971fe1d80b0 does not exist
Jan 26 13:35:07 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8ab0a271-c89e-48cd-b349-69f96584158b does not exist
Jan 26 13:35:07 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b300d18f-1af6-4d00-8a88-5cf356ec9753 does not exist
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:35:07 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:35:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:35:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:35:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 13:35:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 13:35:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:35:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:35:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:35:08 np0005596060 podman[291569]: 2026-01-26 18:35:08.590743053 +0000 UTC m=+0.049089446 container create 8528732f11b1173965568de887b73e7917bf1cd545a84af025f50986e6cf914d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 26 13:35:08 np0005596060 systemd[1]: Started libpod-conmon-8528732f11b1173965568de887b73e7917bf1cd545a84af025f50986e6cf914d.scope.
Jan 26 13:35:08 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:35:08 np0005596060 podman[291569]: 2026-01-26 18:35:08.566960574 +0000 UTC m=+0.025306997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:35:08 np0005596060 podman[291569]: 2026-01-26 18:35:08.681836076 +0000 UTC m=+0.140182499 container init 8528732f11b1173965568de887b73e7917bf1cd545a84af025f50986e6cf914d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:35:08 np0005596060 podman[291569]: 2026-01-26 18:35:08.690419892 +0000 UTC m=+0.148766285 container start 8528732f11b1173965568de887b73e7917bf1cd545a84af025f50986e6cf914d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 13:35:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:35:08 np0005596060 podman[291569]: 2026-01-26 18:35:08.695042948 +0000 UTC m=+0.153389431 container attach 8528732f11b1173965568de887b73e7917bf1cd545a84af025f50986e6cf914d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 26 13:35:08 np0005596060 adoring_ptolemy[291585]: 167 167
Jan 26 13:35:08 np0005596060 systemd[1]: libpod-8528732f11b1173965568de887b73e7917bf1cd545a84af025f50986e6cf914d.scope: Deactivated successfully.
Jan 26 13:35:08 np0005596060 conmon[291585]: conmon 8528732f11b117396556 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8528732f11b1173965568de887b73e7917bf1cd545a84af025f50986e6cf914d.scope/container/memory.events
Jan 26 13:35:08 np0005596060 podman[291569]: 2026-01-26 18:35:08.699656054 +0000 UTC m=+0.158002467 container died 8528732f11b1173965568de887b73e7917bf1cd545a84af025f50986e6cf914d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:35:08 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d675b1fb45eb1a90483f75fefd7ef82dea8b7a45b9622b01cc94534f02568700-merged.mount: Deactivated successfully.
Jan 26 13:35:08 np0005596060 podman[291569]: 2026-01-26 18:35:08.748301128 +0000 UTC m=+0.206647521 container remove 8528732f11b1173965568de887b73e7917bf1cd545a84af025f50986e6cf914d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 13:35:08 np0005596060 systemd[1]: libpod-conmon-8528732f11b1173965568de887b73e7917bf1cd545a84af025f50986e6cf914d.scope: Deactivated successfully.
Jan 26 13:35:08 np0005596060 nova_compute[247421]: 2026-01-26 18:35:08.797 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:08 np0005596060 podman[291609]: 2026-01-26 18:35:08.932883224 +0000 UTC m=+0.053888627 container create c9ee80a2ab28f8761d4f4c6fbfcddc80c4c1549982d37a42f9dbd9fd1337f0df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wright, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:35:08 np0005596060 systemd[1]: Started libpod-conmon-c9ee80a2ab28f8761d4f4c6fbfcddc80c4c1549982d37a42f9dbd9fd1337f0df.scope.
Jan 26 13:35:09 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:35:09 np0005596060 podman[291609]: 2026-01-26 18:35:08.910160122 +0000 UTC m=+0.031165535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:35:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0066ea7cd6d97843d075856e3e545754a05f3e4b1217d2823d531067d72080b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0066ea7cd6d97843d075856e3e545754a05f3e4b1217d2823d531067d72080b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0066ea7cd6d97843d075856e3e545754a05f3e4b1217d2823d531067d72080b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0066ea7cd6d97843d075856e3e545754a05f3e4b1217d2823d531067d72080b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0066ea7cd6d97843d075856e3e545754a05f3e4b1217d2823d531067d72080b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:09 np0005596060 podman[291609]: 2026-01-26 18:35:09.023430883 +0000 UTC m=+0.144436316 container init c9ee80a2ab28f8761d4f4c6fbfcddc80c4c1549982d37a42f9dbd9fd1337f0df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wright, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:35:09 np0005596060 podman[291609]: 2026-01-26 18:35:09.032995464 +0000 UTC m=+0.154000867 container start c9ee80a2ab28f8761d4f4c6fbfcddc80c4c1549982d37a42f9dbd9fd1337f0df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wright, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 13:35:09 np0005596060 podman[291609]: 2026-01-26 18:35:09.03684347 +0000 UTC m=+0.157848883 container attach c9ee80a2ab28f8761d4f4c6fbfcddc80c4c1549982d37a42f9dbd9fd1337f0df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:35:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:09.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:35:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:09.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:35:09 np0005596060 tender_wright[291626]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:35:09 np0005596060 tender_wright[291626]: --> relative data size: 1.0
Jan 26 13:35:09 np0005596060 tender_wright[291626]: --> All data devices are unavailable
Jan 26 13:35:10 np0005596060 systemd[1]: libpod-c9ee80a2ab28f8761d4f4c6fbfcddc80c4c1549982d37a42f9dbd9fd1337f0df.scope: Deactivated successfully.
Jan 26 13:35:10 np0005596060 podman[291609]: 2026-01-26 18:35:10.025127923 +0000 UTC m=+1.146133346 container died c9ee80a2ab28f8761d4f4c6fbfcddc80c4c1549982d37a42f9dbd9fd1337f0df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:35:10 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b0066ea7cd6d97843d075856e3e545754a05f3e4b1217d2823d531067d72080b-merged.mount: Deactivated successfully.
Jan 26 13:35:10 np0005596060 podman[291609]: 2026-01-26 18:35:10.096434038 +0000 UTC m=+1.217439441 container remove c9ee80a2ab28f8761d4f4c6fbfcddc80c4c1549982d37a42f9dbd9fd1337f0df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 13:35:10 np0005596060 systemd[1]: libpod-conmon-c9ee80a2ab28f8761d4f4c6fbfcddc80c4c1549982d37a42f9dbd9fd1337f0df.scope: Deactivated successfully.
Jan 26 13:35:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 309 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Jan 26 13:35:10 np0005596060 podman[291791]: 2026-01-26 18:35:10.727871729 +0000 UTC m=+0.023845041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:35:11 np0005596060 podman[291791]: 2026-01-26 18:35:11.011060506 +0000 UTC m=+0.307033788 container create 2f96c2eb4f4bb7f0345439b3d846ca537be8b0e8aca22406d660f1595bbb6489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:35:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:35:11 np0005596060 systemd[1]: Started libpod-conmon-2f96c2eb4f4bb7f0345439b3d846ca537be8b0e8aca22406d660f1595bbb6489.scope.
Jan 26 13:35:11 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:35:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:11.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:11.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:11 np0005596060 podman[291791]: 2026-01-26 18:35:11.586711944 +0000 UTC m=+0.882685246 container init 2f96c2eb4f4bb7f0345439b3d846ca537be8b0e8aca22406d660f1595bbb6489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 13:35:11 np0005596060 podman[291791]: 2026-01-26 18:35:11.596346346 +0000 UTC m=+0.892319628 container start 2f96c2eb4f4bb7f0345439b3d846ca537be8b0e8aca22406d660f1595bbb6489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_galois, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 13:35:11 np0005596060 angry_galois[291807]: 167 167
Jan 26 13:35:11 np0005596060 systemd[1]: libpod-2f96c2eb4f4bb7f0345439b3d846ca537be8b0e8aca22406d660f1595bbb6489.scope: Deactivated successfully.
Jan 26 13:35:11 np0005596060 podman[291791]: 2026-01-26 18:35:11.603575158 +0000 UTC m=+0.899548440 container attach 2f96c2eb4f4bb7f0345439b3d846ca537be8b0e8aca22406d660f1595bbb6489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_galois, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:35:11 np0005596060 podman[291791]: 2026-01-26 18:35:11.604143382 +0000 UTC m=+0.900116664 container died 2f96c2eb4f4bb7f0345439b3d846ca537be8b0e8aca22406d660f1595bbb6489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_galois, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:35:11 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a7e67924541901d9ee0ee3637c89f4629fa05dc0d5cf9569b415267902ea937a-merged.mount: Deactivated successfully.
Jan 26 13:35:11 np0005596060 podman[291791]: 2026-01-26 18:35:11.72643605 +0000 UTC m=+1.022409332 container remove 2f96c2eb4f4bb7f0345439b3d846ca537be8b0e8aca22406d660f1595bbb6489 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 13:35:11 np0005596060 systemd[1]: libpod-conmon-2f96c2eb4f4bb7f0345439b3d846ca537be8b0e8aca22406d660f1595bbb6489.scope: Deactivated successfully.
Jan 26 13:35:11 np0005596060 podman[291831]: 2026-01-26 18:35:11.950541811 +0000 UTC m=+0.091143515 container create afe5e6f31f472e2d566f81667ce5cc5f2fb9a500b40c4244ac7b5a0575e3907f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:35:11 np0005596060 podman[291831]: 2026-01-26 18:35:11.884147699 +0000 UTC m=+0.024749423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:35:11 np0005596060 systemd[1]: Started libpod-conmon-afe5e6f31f472e2d566f81667ce5cc5f2fb9a500b40c4244ac7b5a0575e3907f.scope.
Jan 26 13:35:12 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:35:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94943a4fd7c3084409b05664c618ee816c3b2c1f2cfaf4cc93cadbdaa1aecbd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94943a4fd7c3084409b05664c618ee816c3b2c1f2cfaf4cc93cadbdaa1aecbd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94943a4fd7c3084409b05664c618ee816c3b2c1f2cfaf4cc93cadbdaa1aecbd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94943a4fd7c3084409b05664c618ee816c3b2c1f2cfaf4cc93cadbdaa1aecbd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:12 np0005596060 podman[291831]: 2026-01-26 18:35:12.097509589 +0000 UTC m=+0.238111313 container init afe5e6f31f472e2d566f81667ce5cc5f2fb9a500b40c4244ac7b5a0575e3907f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:35:12 np0005596060 podman[291831]: 2026-01-26 18:35:12.105054969 +0000 UTC m=+0.245656673 container start afe5e6f31f472e2d566f81667ce5cc5f2fb9a500b40c4244ac7b5a0575e3907f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:35:12 np0005596060 podman[291831]: 2026-01-26 18:35:12.108578368 +0000 UTC m=+0.249180102 container attach afe5e6f31f472e2d566f81667ce5cc5f2fb9a500b40c4244ac7b5a0575e3907f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_roentgen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:35:12 np0005596060 nova_compute[247421]: 2026-01-26 18:35:12.272 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:12 np0005596060 nova_compute[247421]: 2026-01-26 18:35:12.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:35:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 310 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]: {
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:    "1": [
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:        {
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            "devices": [
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "/dev/loop3"
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            ],
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            "lv_name": "ceph_lv0",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            "lv_size": "7511998464",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            "name": "ceph_lv0",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            "tags": {
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "ceph.cluster_name": "ceph",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "ceph.crush_device_class": "",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "ceph.encrypted": "0",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "ceph.osd_id": "1",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "ceph.type": "block",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:                "ceph.vdo": "0"
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            },
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            "type": "block",
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:            "vg_name": "ceph_vg0"
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:        }
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]:    ]
Jan 26 13:35:12 np0005596060 zen_roentgen[291847]: }
Jan 26 13:35:12 np0005596060 systemd[1]: libpod-afe5e6f31f472e2d566f81667ce5cc5f2fb9a500b40c4244ac7b5a0575e3907f.scope: Deactivated successfully.
Jan 26 13:35:12 np0005596060 podman[291831]: 2026-01-26 18:35:12.933480209 +0000 UTC m=+1.074081903 container died afe5e6f31f472e2d566f81667ce5cc5f2fb9a500b40c4244ac7b5a0575e3907f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_roentgen, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 13:35:12 np0005596060 systemd[1]: var-lib-containers-storage-overlay-94943a4fd7c3084409b05664c618ee816c3b2c1f2cfaf4cc93cadbdaa1aecbd6-merged.mount: Deactivated successfully.
Jan 26 13:35:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:35:13Z|00138|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 26 13:35:13 np0005596060 podman[291831]: 2026-01-26 18:35:13.030125651 +0000 UTC m=+1.170727355 container remove afe5e6f31f472e2d566f81667ce5cc5f2fb9a500b40c4244ac7b5a0575e3907f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 13:35:13 np0005596060 systemd[1]: libpod-conmon-afe5e6f31f472e2d566f81667ce5cc5f2fb9a500b40c4244ac7b5a0575e3907f.scope: Deactivated successfully.
Jan 26 13:35:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:35:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:13.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:35:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:13.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:13 np0005596060 podman[292006]: 2026-01-26 18:35:13.639642001 +0000 UTC m=+0.038440268 container create a1102aefa80276a4475db866a4baa16c8487b9d81a11b214f2c06b91f4ff3370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tharp, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:35:13 np0005596060 systemd[1]: Started libpod-conmon-a1102aefa80276a4475db866a4baa16c8487b9d81a11b214f2c06b91f4ff3370.scope.
Jan 26 13:35:13 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:35:13 np0005596060 podman[292006]: 2026-01-26 18:35:13.716948957 +0000 UTC m=+0.115747254 container init a1102aefa80276a4475db866a4baa16c8487b9d81a11b214f2c06b91f4ff3370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:35:13 np0005596060 podman[292006]: 2026-01-26 18:35:13.622438138 +0000 UTC m=+0.021236425 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:35:13 np0005596060 podman[292006]: 2026-01-26 18:35:13.724835645 +0000 UTC m=+0.123633912 container start a1102aefa80276a4475db866a4baa16c8487b9d81a11b214f2c06b91f4ff3370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tharp, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:35:13 np0005596060 podman[292006]: 2026-01-26 18:35:13.727936573 +0000 UTC m=+0.126734870 container attach a1102aefa80276a4475db866a4baa16c8487b9d81a11b214f2c06b91f4ff3370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:35:13 np0005596060 intelligent_tharp[292023]: 167 167
Jan 26 13:35:13 np0005596060 systemd[1]: libpod-a1102aefa80276a4475db866a4baa16c8487b9d81a11b214f2c06b91f4ff3370.scope: Deactivated successfully.
Jan 26 13:35:13 np0005596060 podman[292006]: 2026-01-26 18:35:13.730715003 +0000 UTC m=+0.129513270 container died a1102aefa80276a4475db866a4baa16c8487b9d81a11b214f2c06b91f4ff3370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 26 13:35:13 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0dc502bd422dd3386866641546f7cfc612e80602947bfeff9d97f5e37933fa7b-merged.mount: Deactivated successfully.
Jan 26 13:35:13 np0005596060 podman[292006]: 2026-01-26 18:35:13.766404641 +0000 UTC m=+0.165202908 container remove a1102aefa80276a4475db866a4baa16c8487b9d81a11b214f2c06b91f4ff3370 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 26 13:35:13 np0005596060 systemd[1]: libpod-conmon-a1102aefa80276a4475db866a4baa16c8487b9d81a11b214f2c06b91f4ff3370.scope: Deactivated successfully.
Jan 26 13:35:13 np0005596060 nova_compute[247421]: 2026-01-26 18:35:13.800 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:13 np0005596060 podman[292046]: 2026-01-26 18:35:13.923472785 +0000 UTC m=+0.038131341 container create 28edf52154562433d30729b25f9e0c327552a28f342d09400e37a846249773c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:35:13 np0005596060 systemd[1]: Started libpod-conmon-28edf52154562433d30729b25f9e0c327552a28f342d09400e37a846249773c8.scope.
Jan 26 13:35:13 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:35:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0f47da0b0e8ba8e78af912bca6aed78bb37b3ea23eca188e5186c0931a4da5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0f47da0b0e8ba8e78af912bca6aed78bb37b3ea23eca188e5186c0931a4da5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0f47da0b0e8ba8e78af912bca6aed78bb37b3ea23eca188e5186c0931a4da5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0f47da0b0e8ba8e78af912bca6aed78bb37b3ea23eca188e5186c0931a4da5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:35:13 np0005596060 podman[292046]: 2026-01-26 18:35:13.994509272 +0000 UTC m=+0.109167848 container init 28edf52154562433d30729b25f9e0c327552a28f342d09400e37a846249773c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:35:14 np0005596060 podman[292046]: 2026-01-26 18:35:14.003746995 +0000 UTC m=+0.118405551 container start 28edf52154562433d30729b25f9e0c327552a28f342d09400e37a846249773c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 13:35:14 np0005596060 podman[292046]: 2026-01-26 18:35:13.907932813 +0000 UTC m=+0.022591399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:35:14 np0005596060 podman[292046]: 2026-01-26 18:35:14.007194652 +0000 UTC m=+0.121853238 container attach 28edf52154562433d30729b25f9e0c327552a28f342d09400e37a846249773c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:35:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:35:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:35:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:35:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:35:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:35:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:35:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 119 KiB/s rd, 456 KiB/s wr, 15 op/s
Jan 26 13:35:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:35:14.762 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:35:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:35:14.763 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:35:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:35:14.764 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:35:14 np0005596060 vigorous_allen[292063]: {
Jan 26 13:35:14 np0005596060 vigorous_allen[292063]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:35:14 np0005596060 vigorous_allen[292063]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:35:14 np0005596060 vigorous_allen[292063]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:35:14 np0005596060 vigorous_allen[292063]:        "osd_id": 1,
Jan 26 13:35:14 np0005596060 vigorous_allen[292063]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:35:14 np0005596060 vigorous_allen[292063]:        "type": "bluestore"
Jan 26 13:35:14 np0005596060 vigorous_allen[292063]:    }
Jan 26 13:35:14 np0005596060 vigorous_allen[292063]: }
Jan 26 13:35:14 np0005596060 systemd[1]: libpod-28edf52154562433d30729b25f9e0c327552a28f342d09400e37a846249773c8.scope: Deactivated successfully.
Jan 26 13:35:14 np0005596060 conmon[292063]: conmon 28edf52154562433d307 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28edf52154562433d30729b25f9e0c327552a28f342d09400e37a846249773c8.scope/container/memory.events
Jan 26 13:35:14 np0005596060 podman[292046]: 2026-01-26 18:35:14.85271736 +0000 UTC m=+0.967375926 container died 28edf52154562433d30729b25f9e0c327552a28f342d09400e37a846249773c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 13:35:14 np0005596060 systemd[1]: var-lib-containers-storage-overlay-fd0f47da0b0e8ba8e78af912bca6aed78bb37b3ea23eca188e5186c0931a4da5-merged.mount: Deactivated successfully.
Jan 26 13:35:14 np0005596060 podman[292046]: 2026-01-26 18:35:14.907098439 +0000 UTC m=+1.021756995 container remove 28edf52154562433d30729b25f9e0c327552a28f342d09400e37a846249773c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:35:14 np0005596060 systemd[1]: libpod-conmon-28edf52154562433d30729b25f9e0c327552a28f342d09400e37a846249773c8.scope: Deactivated successfully.
Jan 26 13:35:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:35:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:35:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:35:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:35:15 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev be35c2d0-b887-4e2c-8fc4-94167346ee63 does not exist
Jan 26 13:35:15 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 0a6fd959-3b6d-496a-8e8c-367a21781220 does not exist
Jan 26 13:35:15 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 121d1321-afaf-4381-a94d-14c2dab05e0a does not exist
Jan 26 13:35:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:15.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:35:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:15.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:35:15 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:35:15 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:35:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:35:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 26 13:35:17 np0005596060 nova_compute[247421]: 2026-01-26 18:35:17.274 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:17.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:35:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:17.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:35:18 np0005596060 nova_compute[247421]: 2026-01-26 18:35:18.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:35:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 26 13:35:18 np0005596060 nova_compute[247421]: 2026-01-26 18:35:18.802 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:19.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:19.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:19 np0005596060 podman[292149]: 2026-01-26 18:35:19.801293715 +0000 UTC m=+0.060237407 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 13:35:19 np0005596060 podman[292150]: 2026-01-26 18:35:19.828304565 +0000 UTC m=+0.086367025 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 13:35:20 np0005596060 nova_compute[247421]: 2026-01-26 18:35:20.190 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:20 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:35:20.196 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:35:20 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:35:20.197 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:35:20 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:35:20.197 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:35:20 np0005596060 nova_compute[247421]: 2026-01-26 18:35:20.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:35:20 np0005596060 nova_compute[247421]: 2026-01-26 18:35:20.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:35:20 np0005596060 nova_compute[247421]: 2026-01-26 18:35:20.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:35:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Jan 26 13:35:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:21.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:35:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:21.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:22 np0005596060 nova_compute[247421]: 2026-01-26 18:35:22.326 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:22 np0005596060 nova_compute[247421]: 2026-01-26 18:35:22.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:35:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Jan 26 13:35:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:23.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:23.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:23 np0005596060 nova_compute[247421]: 2026-01-26 18:35:23.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:35:23 np0005596060 nova_compute[247421]: 2026-01-26 18:35:23.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:35:23 np0005596060 nova_compute[247421]: 2026-01-26 18:35:23.804 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 26 13:35:24 np0005596060 nova_compute[247421]: 2026-01-26 18:35:24.702 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:35:24 np0005596060 nova_compute[247421]: 2026-01-26 18:35:24.703 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:35:24 np0005596060 nova_compute[247421]: 2026-01-26 18:35:24.703 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:35:24 np0005596060 nova_compute[247421]: 2026-01-26 18:35:24.703 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:35:24 np0005596060 nova_compute[247421]: 2026-01-26 18:35:24.703 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:35:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:35:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2555989420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.266 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.432 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.434 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4620MB free_disk=20.942729949951172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.434 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.434 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:35:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:25.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.503 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.504 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.524 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:35:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:25.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:35:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520444626' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.953 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.958 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.974 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.976 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:35:25 np0005596060 nova_compute[247421]: 2026-01-26 18:35:25.976 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:35:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:35:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 99 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 7.6 KiB/s rd, 3.2 KiB/s wr, 10 op/s
Jan 26 13:35:26 np0005596060 nova_compute[247421]: 2026-01-26 18:35:26.978 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:35:26 np0005596060 nova_compute[247421]: 2026-01-26 18:35:26.978 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:35:26 np0005596060 nova_compute[247421]: 2026-01-26 18:35:26.978 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:35:27 np0005596060 nova_compute[247421]: 2026-01-26 18:35:27.328 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:35:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:27.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:35:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:27.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 KiB/s wr, 24 op/s
Jan 26 13:35:28 np0005596060 nova_compute[247421]: 2026-01-26 18:35:28.807 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:29 np0005596060 nova_compute[247421]: 2026-01-26 18:35:29.423 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:35:29 np0005596060 nova_compute[247421]: 2026-01-26 18:35:29.424 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:35:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:29.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:29.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.6 KiB/s wr, 24 op/s
Jan 26 13:35:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:31.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:35:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:31.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:32 np0005596060 nova_compute[247421]: 2026-01-26 18:35:32.330 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 26 13:35:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:35:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:33.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:35:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:35:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:33.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:35:33 np0005596060 nova_compute[247421]: 2026-01-26 18:35:33.810 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 26 13:35:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:35.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:35.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:35:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 26 13:35:37 np0005596060 nova_compute[247421]: 2026-01-26 18:35:37.332 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:35:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:37.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:35:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:35:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:37.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:35:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 11 KiB/s rd, 938 B/s wr, 17 op/s
Jan 26 13:35:38 np0005596060 nova_compute[247421]: 2026-01-26 18:35:38.813 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:39.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:39.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:35:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2489620099' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:35:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:35:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2489620099' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:35:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 597 B/s wr, 3 op/s
Jan 26 13:35:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:35:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:41.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:35:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:35:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:41.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:42 np0005596060 nova_compute[247421]: 2026-01-26 18:35:42.333 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 597 B/s wr, 3 op/s
Jan 26 13:35:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:43.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:43.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:43 np0005596060 nova_compute[247421]: 2026-01-26 18:35:43.815 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:35:44
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'vms', 'volumes', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.meta']
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:35:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:35:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:35:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:45.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:35:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:35:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:45.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:35:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:35:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:35:47 np0005596060 nova_compute[247421]: 2026-01-26 18:35:47.337 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:47.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:35:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:47.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:35:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:35:48 np0005596060 nova_compute[247421]: 2026-01-26 18:35:48.817 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:49.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:49.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:35:50 np0005596060 podman[292352]: 2026-01-26 18:35:50.825264283 +0000 UTC m=+0.082381105 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 26 13:35:50 np0005596060 podman[292353]: 2026-01-26 18:35:50.840464065 +0000 UTC m=+0.086504028 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, 
container_name=ovn_controller, managed_by=edpm_ansible)
Jan 26 13:35:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:51.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:35:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:51.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:52 np0005596060 nova_compute[247421]: 2026-01-26 18:35:52.339 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:35:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:35:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:53.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:53.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:53 np0005596060 nova_compute[247421]: 2026-01-26 18:35:53.820 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:35:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:35:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:55.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:55.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:35:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:35:57 np0005596060 nova_compute[247421]: 2026-01-26 18:35:57.373 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:35:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:57.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:57.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:35:58 np0005596060 nova_compute[247421]: 2026-01-26 18:35:58.822 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:35:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:35:59.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:35:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:35:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:35:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:35:59.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:01.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:36:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:01.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:02 np0005596060 nova_compute[247421]: 2026-01-26 18:36:02.374 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:36:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 26 13:36:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:03.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:03.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:03 np0005596060 nova_compute[247421]: 2026-01-26 18:36:03.826 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:36:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:36:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 26 13:36:05 np0005596060 ceph-mds[93477]: mds.beacon.cephfs.compute-0.wenkwv missed beacon ack from the monitors
Jan 26 13:36:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:05.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:05.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:06 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 26 13:36:06 np0005596060 ceph-mon[74267]: paxos.0).electionLogic(19) init, last seen epoch 19, mid-election, bumping
Jan 26 13:36:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 13:36:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:36:07 np0005596060 nova_compute[247421]: 2026-01-26 18:36:07.376 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:36:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:07.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:36:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:07.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:36:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:36:08 np0005596060 nova_compute[247421]: 2026-01-26 18:36:08.827 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:36:09 np0005596060 ceph-mds[93477]: mds.beacon.cephfs.compute-0.wenkwv missed beacon ack from the monitors
Jan 26 13:36:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:09.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:09.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:36:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 26 13:36:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 26 13:36:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 13:36:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.oqvedy=up:active} 2 up:standby
Jan 26 13:36:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Jan 26 13:36:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.mbryrf(active, since 56m), standbys: compute-2.cchxrf, compute-1.qpyzhk
Jan 26 13:36:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Jan 26 13:36:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Jan 26 13:36:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Jan 26 13:36:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Jan 26 13:36:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:11.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:36:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:11.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:12 np0005596060 nova_compute[247421]: 2026-01-26 18:36:12.377 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:36:12 np0005596060 ceph-mon[74267]: mon.compute-2 calling monitor election
Jan 26 13:36:12 np0005596060 ceph-mon[74267]: mon.compute-0 calling monitor election
Jan 26 13:36:12 np0005596060 ceph-mon[74267]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 26 13:36:12 np0005596060 ceph-mon[74267]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Jan 26 13:36:12 np0005596060 ceph-mon[74267]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2
Jan 26 13:36:12 np0005596060 ceph-mon[74267]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Jan 26 13:36:12 np0005596060 ceph-mon[74267]:    mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Jan 26 13:36:12 np0005596060 nova_compute[247421]: 2026-01-26 18:36:12.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:36:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:36:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:13.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:13.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:13 np0005596060 nova_compute[247421]: 2026-01-26 18:36:13.830 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:36:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:36:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:36:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:36:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:36:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:36:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:36:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 26 13:36:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:36:14.763 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:36:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:36:14.764 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:36:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:36:14.764 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:36:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:15.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:15.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:36:16 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev c78e3e52-5b4d-47fc-b3f2-5403dc0ba5c8 does not exist
Jan 26 13:36:16 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 24135323-7856-41e6-b46b-d605b5332302 does not exist
Jan 26 13:36:16 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 3f64c07d-2ccb-41b6-940e-af4ad99ddfdf does not exist
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:36:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:36:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 85 B/s wr, 0 op/s
Jan 26 13:36:16 np0005596060 podman[292734]: 2026-01-26 18:36:16.969776086 +0000 UTC m=+0.039915346 container create 1cb68c0a68de8844a1c16a140bc80cfc503290fb8e2ebc3461191d4fbad7d005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 13:36:17 np0005596060 systemd[1]: Started libpod-conmon-1cb68c0a68de8844a1c16a140bc80cfc503290fb8e2ebc3461191d4fbad7d005.scope.
Jan 26 13:36:17 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:36:17 np0005596060 podman[292734]: 2026-01-26 18:36:16.951920157 +0000 UTC m=+0.022059447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:36:17 np0005596060 podman[292734]: 2026-01-26 18:36:17.056900179 +0000 UTC m=+0.127039459 container init 1cb68c0a68de8844a1c16a140bc80cfc503290fb8e2ebc3461191d4fbad7d005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nobel, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:36:17 np0005596060 podman[292734]: 2026-01-26 18:36:17.065030183 +0000 UTC m=+0.135169433 container start 1cb68c0a68de8844a1c16a140bc80cfc503290fb8e2ebc3461191d4fbad7d005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 13:36:17 np0005596060 podman[292734]: 2026-01-26 18:36:17.068429369 +0000 UTC m=+0.138568659 container attach 1cb68c0a68de8844a1c16a140bc80cfc503290fb8e2ebc3461191d4fbad7d005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 13:36:17 np0005596060 competent_nobel[292750]: 167 167
Jan 26 13:36:17 np0005596060 systemd[1]: libpod-1cb68c0a68de8844a1c16a140bc80cfc503290fb8e2ebc3461191d4fbad7d005.scope: Deactivated successfully.
Jan 26 13:36:17 np0005596060 podman[292734]: 2026-01-26 18:36:17.071545797 +0000 UTC m=+0.141685057 container died 1cb68c0a68de8844a1c16a140bc80cfc503290fb8e2ebc3461191d4fbad7d005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:36:17 np0005596060 systemd[1]: var-lib-containers-storage-overlay-98a5b4f9fbd231b1675eaec8647bcc7321e5a63558b52c5ea4a1b448c35015a9-merged.mount: Deactivated successfully.
Jan 26 13:36:17 np0005596060 podman[292734]: 2026-01-26 18:36:17.109584005 +0000 UTC m=+0.179723265 container remove 1cb68c0a68de8844a1c16a140bc80cfc503290fb8e2ebc3461191d4fbad7d005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:36:17 np0005596060 systemd[1]: libpod-conmon-1cb68c0a68de8844a1c16a140bc80cfc503290fb8e2ebc3461191d4fbad7d005.scope: Deactivated successfully.
Jan 26 13:36:17 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 13:36:17 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:36:17 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:36:17 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:36:17 np0005596060 podman[292773]: 2026-01-26 18:36:17.265654643 +0000 UTC m=+0.041363962 container create e98aab523b6081e1b1d0a7feac975ecbf387160a658c41c182ffff58ff6c8f7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haslett, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 26 13:36:17 np0005596060 systemd[1]: Started libpod-conmon-e98aab523b6081e1b1d0a7feac975ecbf387160a658c41c182ffff58ff6c8f7d.scope.
Jan 26 13:36:17 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:36:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117aee076cfe910c6fdc0eb4e5303b2edb8e0c36de584ce863fe055a92f43341/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117aee076cfe910c6fdc0eb4e5303b2edb8e0c36de584ce863fe055a92f43341/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117aee076cfe910c6fdc0eb4e5303b2edb8e0c36de584ce863fe055a92f43341/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117aee076cfe910c6fdc0eb4e5303b2edb8e0c36de584ce863fe055a92f43341/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117aee076cfe910c6fdc0eb4e5303b2edb8e0c36de584ce863fe055a92f43341/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:17 np0005596060 podman[292773]: 2026-01-26 18:36:17.249424684 +0000 UTC m=+0.025133993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:36:17 np0005596060 podman[292773]: 2026-01-26 18:36:17.347962644 +0000 UTC m=+0.123671943 container init e98aab523b6081e1b1d0a7feac975ecbf387160a658c41c182ffff58ff6c8f7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:36:17 np0005596060 podman[292773]: 2026-01-26 18:36:17.35771929 +0000 UTC m=+0.133428609 container start e98aab523b6081e1b1d0a7feac975ecbf387160a658c41c182ffff58ff6c8f7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haslett, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:36:17 np0005596060 podman[292773]: 2026-01-26 18:36:17.361396132 +0000 UTC m=+0.137105451 container attach e98aab523b6081e1b1d0a7feac975ecbf387160a658c41c182ffff58ff6c8f7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haslett, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:36:17 np0005596060 nova_compute[247421]: 2026-01-26 18:36:17.380 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:17.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:36:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:17.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:36:18 np0005596060 vigilant_haslett[292789]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:36:18 np0005596060 vigilant_haslett[292789]: --> relative data size: 1.0
Jan 26 13:36:18 np0005596060 vigilant_haslett[292789]: --> All data devices are unavailable
Jan 26 13:36:18 np0005596060 systemd[1]: libpod-e98aab523b6081e1b1d0a7feac975ecbf387160a658c41c182ffff58ff6c8f7d.scope: Deactivated successfully.
Jan 26 13:36:18 np0005596060 podman[292804]: 2026-01-26 18:36:18.221405077 +0000 UTC m=+0.023259017 container died e98aab523b6081e1b1d0a7feac975ecbf387160a658c41c182ffff58ff6c8f7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:36:18 np0005596060 systemd[1]: var-lib-containers-storage-overlay-117aee076cfe910c6fdc0eb4e5303b2edb8e0c36de584ce863fe055a92f43341-merged.mount: Deactivated successfully.
Jan 26 13:36:18 np0005596060 podman[292804]: 2026-01-26 18:36:18.276686538 +0000 UTC m=+0.078540458 container remove e98aab523b6081e1b1d0a7feac975ecbf387160a658c41c182ffff58ff6c8f7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_haslett, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:36:18 np0005596060 systemd[1]: libpod-conmon-e98aab523b6081e1b1d0a7feac975ecbf387160a658c41c182ffff58ff6c8f7d.scope: Deactivated successfully.
Jan 26 13:36:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:18 np0005596060 nova_compute[247421]: 2026-01-26 18:36:18.832 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:18 np0005596060 podman[292961]: 2026-01-26 18:36:18.867949218 +0000 UTC m=+0.033540025 container create caf7a7a81f4e98da8ea0214f4ba9dc1b167407bbd9b5c2eabe28ce16870a9ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 13:36:18 np0005596060 systemd[1]: Started libpod-conmon-caf7a7a81f4e98da8ea0214f4ba9dc1b167407bbd9b5c2eabe28ce16870a9ffb.scope.
Jan 26 13:36:18 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:36:18 np0005596060 podman[292961]: 2026-01-26 18:36:18.937264602 +0000 UTC m=+0.102855429 container init caf7a7a81f4e98da8ea0214f4ba9dc1b167407bbd9b5c2eabe28ce16870a9ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:36:18 np0005596060 podman[292961]: 2026-01-26 18:36:18.94314995 +0000 UTC m=+0.108740757 container start caf7a7a81f4e98da8ea0214f4ba9dc1b167407bbd9b5c2eabe28ce16870a9ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 26 13:36:18 np0005596060 blissful_golick[292979]: 167 167
Jan 26 13:36:18 np0005596060 systemd[1]: libpod-caf7a7a81f4e98da8ea0214f4ba9dc1b167407bbd9b5c2eabe28ce16870a9ffb.scope: Deactivated successfully.
Jan 26 13:36:18 np0005596060 podman[292961]: 2026-01-26 18:36:18.853837553 +0000 UTC m=+0.019428380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:36:18 np0005596060 podman[292961]: 2026-01-26 18:36:18.957969743 +0000 UTC m=+0.123560590 container attach caf7a7a81f4e98da8ea0214f4ba9dc1b167407bbd9b5c2eabe28ce16870a9ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 13:36:18 np0005596060 podman[292961]: 2026-01-26 18:36:18.958450396 +0000 UTC m=+0.124041213 container died caf7a7a81f4e98da8ea0214f4ba9dc1b167407bbd9b5c2eabe28ce16870a9ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 13:36:18 np0005596060 systemd[1]: var-lib-containers-storage-overlay-14a98e1709245141a940ad8bae8cde3f687e66d4d771d0bf97aa3ffe2bee2ca7-merged.mount: Deactivated successfully.
Jan 26 13:36:19 np0005596060 podman[292961]: 2026-01-26 18:36:19.00034246 +0000 UTC m=+0.165933277 container remove caf7a7a81f4e98da8ea0214f4ba9dc1b167407bbd9b5c2eabe28ce16870a9ffb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 26 13:36:19 np0005596060 systemd[1]: libpod-conmon-caf7a7a81f4e98da8ea0214f4ba9dc1b167407bbd9b5c2eabe28ce16870a9ffb.scope: Deactivated successfully.
Jan 26 13:36:19 np0005596060 podman[293005]: 2026-01-26 18:36:19.192845165 +0000 UTC m=+0.038213803 container create c69d2eb41aae9ffeca9986710f747e3b3ece8d099fcfbf3b41e4faded33472c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elgamal, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:36:19 np0005596060 systemd[1]: Started libpod-conmon-c69d2eb41aae9ffeca9986710f747e3b3ece8d099fcfbf3b41e4faded33472c1.scope.
Jan 26 13:36:19 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:36:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3006b63bcb7fe9f8daefc5a33ee4850551c8b2dd284dcb2767c41552249187b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3006b63bcb7fe9f8daefc5a33ee4850551c8b2dd284dcb2767c41552249187b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3006b63bcb7fe9f8daefc5a33ee4850551c8b2dd284dcb2767c41552249187b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:19 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3006b63bcb7fe9f8daefc5a33ee4850551c8b2dd284dcb2767c41552249187b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:19 np0005596060 podman[293005]: 2026-01-26 18:36:19.175721874 +0000 UTC m=+0.021090532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:36:19 np0005596060 podman[293005]: 2026-01-26 18:36:19.286575644 +0000 UTC m=+0.131944322 container init c69d2eb41aae9ffeca9986710f747e3b3ece8d099fcfbf3b41e4faded33472c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:36:19 np0005596060 podman[293005]: 2026-01-26 18:36:19.292597435 +0000 UTC m=+0.137966073 container start c69d2eb41aae9ffeca9986710f747e3b3ece8d099fcfbf3b41e4faded33472c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elgamal, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 26 13:36:19 np0005596060 podman[293005]: 2026-01-26 18:36:19.29556373 +0000 UTC m=+0.140932388 container attach c69d2eb41aae9ffeca9986710f747e3b3ece8d099fcfbf3b41e4faded33472c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 13:36:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:19.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:36:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:19.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:36:19 np0005596060 nova_compute[247421]: 2026-01-26 18:36:19.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]: {
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:    "1": [
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:        {
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            "devices": [
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "/dev/loop3"
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            ],
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            "lv_name": "ceph_lv0",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            "lv_size": "7511998464",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            "name": "ceph_lv0",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            "tags": {
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "ceph.cluster_name": "ceph",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "ceph.crush_device_class": "",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "ceph.encrypted": "0",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "ceph.osd_id": "1",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "ceph.type": "block",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:                "ceph.vdo": "0"
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            },
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            "type": "block",
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:            "vg_name": "ceph_vg0"
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:        }
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]:    ]
Jan 26 13:36:20 np0005596060 zen_elgamal[293021]: }
Jan 26 13:36:20 np0005596060 systemd[1]: libpod-c69d2eb41aae9ffeca9986710f747e3b3ece8d099fcfbf3b41e4faded33472c1.scope: Deactivated successfully.
Jan 26 13:36:20 np0005596060 podman[293005]: 2026-01-26 18:36:20.046762146 +0000 UTC m=+0.892130784 container died c69d2eb41aae9ffeca9986710f747e3b3ece8d099fcfbf3b41e4faded33472c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 26 13:36:20 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:36:20.143 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:36:20 np0005596060 nova_compute[247421]: 2026-01-26 18:36:20.144 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:20 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:36:20.144 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:36:20 np0005596060 systemd[1]: var-lib-containers-storage-overlay-3006b63bcb7fe9f8daefc5a33ee4850551c8b2dd284dcb2767c41552249187b0-merged.mount: Deactivated successfully.
Jan 26 13:36:20 np0005596060 podman[293005]: 2026-01-26 18:36:20.333757809 +0000 UTC m=+1.179126447 container remove c69d2eb41aae9ffeca9986710f747e3b3ece8d099fcfbf3b41e4faded33472c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_elgamal, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:36:20 np0005596060 systemd[1]: libpod-conmon-c69d2eb41aae9ffeca9986710f747e3b3ece8d099fcfbf3b41e4faded33472c1.scope: Deactivated successfully.
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.570044) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452580570146, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 2187, "num_deletes": 255, "total_data_size": 3880039, "memory_usage": 3942736, "flush_reason": "Manual Compaction"}
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452580590740, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 3773086, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38922, "largest_seqno": 41108, "table_properties": {"data_size": 3763247, "index_size": 6205, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 21034, "raw_average_key_size": 20, "raw_value_size": 3743310, "raw_average_value_size": 3706, "num_data_blocks": 270, "num_entries": 1010, "num_filter_entries": 1010, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769452362, "oldest_key_time": 1769452362, "file_creation_time": 1769452580, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 20725 microseconds, and 8030 cpu microseconds.
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.590780) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 3773086 bytes OK
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.590802) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.594368) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.594385) EVENT_LOG_v1 {"time_micros": 1769452580594379, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.594405) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 3871123, prev total WAL file size 3871123, number of live WAL files 2.
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.595551) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(3684KB)], [86(8664KB)]
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452580595655, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 12645367, "oldest_snapshot_seqno": -1}
Jan 26 13:36:20 np0005596060 nova_compute[247421]: 2026-01-26 18:36:20.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:36:20 np0005596060 nova_compute[247421]: 2026-01-26 18:36:20.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6711 keys, 10681380 bytes, temperature: kUnknown
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452580663413, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10681380, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10636728, "index_size": 26748, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 171998, "raw_average_key_size": 25, "raw_value_size": 10516461, "raw_average_value_size": 1567, "num_data_blocks": 1071, "num_entries": 6711, "num_filter_entries": 6711, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769452580, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.663666) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10681380 bytes
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.665593) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.4 rd, 157.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.5 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 7248, records dropped: 537 output_compression: NoCompression
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.665609) EVENT_LOG_v1 {"time_micros": 1769452580665602, "job": 50, "event": "compaction_finished", "compaction_time_micros": 67843, "compaction_time_cpu_micros": 24069, "output_level": 6, "num_output_files": 1, "total_output_size": 10681380, "num_input_records": 7248, "num_output_records": 6711, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452580666497, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452580668766, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.595458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.668902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.668908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.668910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.668911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:36:20 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:20.668912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:36:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:20 np0005596060 podman[293233]: 2026-01-26 18:36:20.912898004 +0000 UTC m=+0.041339401 container create 037c13f907cd8c24d864f8169cfa7cd6101f9732f7b52a73663c6e02760619c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 13:36:20 np0005596060 systemd[1]: Started libpod-conmon-037c13f907cd8c24d864f8169cfa7cd6101f9732f7b52a73663c6e02760619c5.scope.
Jan 26 13:36:20 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:36:20 np0005596060 podman[293233]: 2026-01-26 18:36:20.986798734 +0000 UTC m=+0.115240151 container init 037c13f907cd8c24d864f8169cfa7cd6101f9732f7b52a73663c6e02760619c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:36:20 np0005596060 podman[293233]: 2026-01-26 18:36:20.895322582 +0000 UTC m=+0.023764059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:36:20 np0005596060 podman[293233]: 2026-01-26 18:36:20.998882678 +0000 UTC m=+0.127324085 container start 037c13f907cd8c24d864f8169cfa7cd6101f9732f7b52a73663c6e02760619c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:36:21 np0005596060 podman[293233]: 2026-01-26 18:36:21.003586797 +0000 UTC m=+0.132028194 container attach 037c13f907cd8c24d864f8169cfa7cd6101f9732f7b52a73663c6e02760619c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 13:36:21 np0005596060 objective_ganguly[293251]: 167 167
Jan 26 13:36:21 np0005596060 podman[293233]: 2026-01-26 18:36:21.007023803 +0000 UTC m=+0.135465190 container died 037c13f907cd8c24d864f8169cfa7cd6101f9732f7b52a73663c6e02760619c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:36:21 np0005596060 systemd[1]: libpod-037c13f907cd8c24d864f8169cfa7cd6101f9732f7b52a73663c6e02760619c5.scope: Deactivated successfully.
Jan 26 13:36:21 np0005596060 systemd[1]: var-lib-containers-storage-overlay-74c9a71b9da502205c351966de1c99090c26f9126f54a6532e2651a05e26391a-merged.mount: Deactivated successfully.
Jan 26 13:36:21 np0005596060 podman[293233]: 2026-01-26 18:36:21.047463681 +0000 UTC m=+0.175905068 container remove 037c13f907cd8c24d864f8169cfa7cd6101f9732f7b52a73663c6e02760619c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 26 13:36:21 np0005596060 podman[293247]: 2026-01-26 18:36:21.051214485 +0000 UTC m=+0.095213057 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 13:36:21 np0005596060 systemd[1]: libpod-conmon-037c13f907cd8c24d864f8169cfa7cd6101f9732f7b52a73663c6e02760619c5.scope: Deactivated successfully.
Jan 26 13:36:21 np0005596060 podman[293250]: 2026-01-26 18:36:21.065390492 +0000 UTC m=+0.103743472 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 13:36:21 np0005596060 podman[293314]: 2026-01-26 18:36:21.210069514 +0000 UTC m=+0.044942773 container create 55c08169a6a5fae8339650151b6bc2fe05d644ca37eed10d04fd1b66efe2b488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:36:21 np0005596060 systemd[1]: Started libpod-conmon-55c08169a6a5fae8339650151b6bc2fe05d644ca37eed10d04fd1b66efe2b488.scope.
Jan 26 13:36:21 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:36:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2047080c9ae49be889d1f15aa61e373a1b3c4c34dd20f06c133a7ef2a3e8bdde/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2047080c9ae49be889d1f15aa61e373a1b3c4c34dd20f06c133a7ef2a3e8bdde/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2047080c9ae49be889d1f15aa61e373a1b3c4c34dd20f06c133a7ef2a3e8bdde/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2047080c9ae49be889d1f15aa61e373a1b3c4c34dd20f06c133a7ef2a3e8bdde/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:36:21 np0005596060 podman[293314]: 2026-01-26 18:36:21.192346317 +0000 UTC m=+0.027219596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:36:21 np0005596060 podman[293314]: 2026-01-26 18:36:21.297277498 +0000 UTC m=+0.132150787 container init 55c08169a6a5fae8339650151b6bc2fe05d644ca37eed10d04fd1b66efe2b488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 13:36:21 np0005596060 podman[293314]: 2026-01-26 18:36:21.307108866 +0000 UTC m=+0.141982125 container start 55c08169a6a5fae8339650151b6bc2fe05d644ca37eed10d04fd1b66efe2b488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:36:21 np0005596060 podman[293314]: 2026-01-26 18:36:21.310805169 +0000 UTC m=+0.145678448 container attach 55c08169a6a5fae8339650151b6bc2fe05d644ca37eed10d04fd1b66efe2b488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 13:36:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:21.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:36:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:21.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:21 np0005596060 nova_compute[247421]: 2026-01-26 18:36:21.648 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:36:22 np0005596060 nifty_bohr[293330]: {
Jan 26 13:36:22 np0005596060 nifty_bohr[293330]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:36:22 np0005596060 nifty_bohr[293330]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:36:22 np0005596060 nifty_bohr[293330]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:36:22 np0005596060 nifty_bohr[293330]:        "osd_id": 1,
Jan 26 13:36:22 np0005596060 nifty_bohr[293330]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:36:22 np0005596060 nifty_bohr[293330]:        "type": "bluestore"
Jan 26 13:36:22 np0005596060 nifty_bohr[293330]:    }
Jan 26 13:36:22 np0005596060 nifty_bohr[293330]: }
Jan 26 13:36:22 np0005596060 systemd[1]: libpod-55c08169a6a5fae8339650151b6bc2fe05d644ca37eed10d04fd1b66efe2b488.scope: Deactivated successfully.
Jan 26 13:36:22 np0005596060 podman[293314]: 2026-01-26 18:36:22.167620132 +0000 UTC m=+1.002493381 container died 55c08169a6a5fae8339650151b6bc2fe05d644ca37eed10d04fd1b66efe2b488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:36:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2047080c9ae49be889d1f15aa61e373a1b3c4c34dd20f06c133a7ef2a3e8bdde-merged.mount: Deactivated successfully.
Jan 26 13:36:22 np0005596060 podman[293314]: 2026-01-26 18:36:22.22557212 +0000 UTC m=+1.060445379 container remove 55c08169a6a5fae8339650151b6bc2fe05d644ca37eed10d04fd1b66efe2b488 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 13:36:22 np0005596060 systemd[1]: libpod-conmon-55c08169a6a5fae8339650151b6bc2fe05d644ca37eed10d04fd1b66efe2b488.scope: Deactivated successfully.
Jan 26 13:36:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:36:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:36:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:36:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:36:22 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 4bef7a3b-4805-44a0-9324-c3e01c4bad7a does not exist
Jan 26 13:36:22 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev a94df4dd-a726-4bef-9012-257c5dd8d7a1 does not exist
Jan 26 13:36:22 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5e2da9d8-e307-4efa-8f55-3bf0e29f27fd does not exist
Jan 26 13:36:22 np0005596060 nova_compute[247421]: 2026-01-26 18:36:22.382 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:22 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:36:22 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:36:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:23.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:36:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:23.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:36:23 np0005596060 nova_compute[247421]: 2026-01-26 18:36:23.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:36:23 np0005596060 nova_compute[247421]: 2026-01-26 18:36:23.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:36:23 np0005596060 nova_compute[247421]: 2026-01-26 18:36:23.835 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:24 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:36:24.146 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:36:24 np0005596060 nova_compute[247421]: 2026-01-26 18:36:24.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:36:24 np0005596060 nova_compute[247421]: 2026-01-26 18:36:24.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:36:24 np0005596060 nova_compute[247421]: 2026-01-26 18:36:24.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:36:24 np0005596060 nova_compute[247421]: 2026-01-26 18:36:24.668 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:36:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:25.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:25.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:25 np0005596060 nova_compute[247421]: 2026-01-26 18:36:25.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:36:25 np0005596060 nova_compute[247421]: 2026-01-26 18:36:25.695 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:36:25 np0005596060 nova_compute[247421]: 2026-01-26 18:36:25.696 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:36:25 np0005596060 nova_compute[247421]: 2026-01-26 18:36:25.696 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:36:25 np0005596060 nova_compute[247421]: 2026-01-26 18:36:25.696 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:36:25 np0005596060 nova_compute[247421]: 2026-01-26 18:36:25.696 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:36:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:36:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/543081489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.279 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.425 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.427 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4632MB free_disk=20.988269805908203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.427 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.428 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.487 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.487 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:36:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e10 no beacon from mds.-1.0 (gid: 24149 addr: [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] state: up:standby) since 15.0562
Jan 26 13:36:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.506 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:36:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:36:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2371449970' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.945 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.953 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.984 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.986 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:36:26 np0005596060 nova_compute[247421]: 2026-01-26 18:36:26.987 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:36:27 np0005596060 nova_compute[247421]: 2026-01-26 18:36:27.385 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:27.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:27.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:27 np0005596060 nova_compute[247421]: 2026-01-26 18:36:27.982 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:36:28 np0005596060 nova_compute[247421]: 2026-01-26 18:36:28.072 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:36:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:28 np0005596060 nova_compute[247421]: 2026-01-26 18:36:28.837 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:29.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:36:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:29.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:36:30 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:31 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e10 no beacon from mds.-1.0 (gid: 24149 addr: [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] state: up:standby) since 20.0578
Jan 26 13:36:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:36:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:31.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:31.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:32 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:32 np0005596060 nova_compute[247421]: 2026-01-26 18:36:32.385 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:33 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check failed: 1 slow ops, oldest one blocked for 31 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:36:33 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:33.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:36:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:33.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:36:33 np0005596060 nova_compute[247421]: 2026-01-26 18:36:33.840 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:34 np0005596060 ceph-mon[74267]: Health check failed: 1 slow ops, oldest one blocked for 31 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:36:34 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:35 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:35.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:35.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:36 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e10 no beacon from mds.-1.0 (gid: 24149 addr: [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] state: up:standby) since 25.0588
Jan 26 13:36:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:36:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:37 np0005596060 nova_compute[247421]: 2026-01-26 18:36:37.428 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:37 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:37.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:37.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:38 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:38 np0005596060 nova_compute[247421]: 2026-01-26 18:36:38.843 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:39.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:39 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:36:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:39.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:36:40 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 37 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e10 no beacon from mds.-1.0 (gid: 24149 addr: [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] state: up:standby) since 30.0597
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:36:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:41.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.547754) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452601547828, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 474, "num_deletes": 258, "total_data_size": 401673, "memory_usage": 411688, "flush_reason": "Manual Compaction"}
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452601552167, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 397404, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41109, "largest_seqno": 41582, "table_properties": {"data_size": 394756, "index_size": 684, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6583, "raw_average_key_size": 18, "raw_value_size": 389257, "raw_average_value_size": 1099, "num_data_blocks": 29, "num_entries": 354, "num_filter_entries": 354, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769452581, "oldest_key_time": 1769452581, "file_creation_time": 1769452601, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 4472 microseconds, and 1832 cpu microseconds.
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.552237) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 397404 bytes OK
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.552288) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.554617) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.554634) EVENT_LOG_v1 {"time_micros": 1769452601554628, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.554652) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 398829, prev total WAL file size 496786, number of live WAL files 2.
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.555213) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323535' seq:72057594037927935, type:22 .. '6C6F676D0031353039' seq:0, type:0; will stop at (end)
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(388KB)], [89(10MB)]
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452601555247, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11078784, "oldest_snapshot_seqno": -1}
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6538 keys, 10960050 bytes, temperature: kUnknown
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452601633669, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10960050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10915743, "index_size": 26851, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 169409, "raw_average_key_size": 25, "raw_value_size": 10797594, "raw_average_value_size": 1651, "num_data_blocks": 1073, "num_entries": 6538, "num_filter_entries": 6538, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769452601, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.633917) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10960050 bytes
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.635098) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.1 rd, 139.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.2 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(55.5) write-amplify(27.6) OK, records in: 7065, records dropped: 527 output_compression: NoCompression
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.635116) EVENT_LOG_v1 {"time_micros": 1769452601635107, "job": 52, "event": "compaction_finished", "compaction_time_micros": 78514, "compaction_time_cpu_micros": 25053, "output_level": 6, "num_output_files": 1, "total_output_size": 10960050, "num_input_records": 7065, "num_output_records": 6538, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452601635315, "job": 52, "event": "table_file_deletion", "file_number": 91}
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452601637412, "job": 52, "event": "table_file_deletion", "file_number": 89}
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.555079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.637442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.637447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.637449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.637450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:36:41.637452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:36:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:41.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.mbryrf(active, since 56m), standbys: compute-2.cchxrf
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:41 np0005596060 ceph-mon[74267]: Health check update: 1 slow ops, oldest one blocked for 37 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:36:42 np0005596060 nova_compute[247421]: 2026-01-26 18:36:42.430 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:42 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:42 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:43.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:36:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:43.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:36:43 np0005596060 nova_compute[247421]: 2026-01-26 18:36:43.845 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:43 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:43 np0005596060 ceph-mgr[74563]: ms_deliver_dispatch: unhandled message 0x5642f6284000 mgrreport(mgr.compute-1.qpyzhk +0-0 packed 54) v9 from mgr.24104 192.168.122.101:0/301613812
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:36:44
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'volumes', 'backups', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:44 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:36:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:36:45 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:45.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:45.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:45 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:46 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:46 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 41 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:36:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e10 no beacon from mds.-1.0 (gid: 24149 addr: [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] state: up:standby) since 35.0642
Jan 26 13:36:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:36:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:46 np0005596060 ceph-mon[74267]: Health check update: 1 slow ops, oldest one blocked for 41 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:36:46 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:47 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:47 np0005596060 nova_compute[247421]: 2026-01-26 18:36:47.431 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:36:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:47.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:36:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:47.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:48 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:48 np0005596060 nova_compute[247421]: 2026-01-26 18:36:48.848 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:48 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:49 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:49.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:49.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:49 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:50 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:50 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:51 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:51 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 46 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:36:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e10 no beacon from mds.-1.0 (gid: 24149 addr: [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] state: up:standby) since 40.0654
Jan 26 13:36:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:36:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:36:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:51.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:36:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:51.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:51 np0005596060 podman[293525]: 2026-01-26 18:36:51.795342726 +0000 UTC m=+0.049207359 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 13:36:51 np0005596060 podman[293526]: 2026-01-26 18:36:51.859134962 +0000 UTC m=+0.113000705 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 13:36:51 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:51 np0005596060 ceph-mon[74267]: Health check update: 1 slow ops, oldest one blocked for 46 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:36:52 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:52 np0005596060 nova_compute[247421]: 2026-01-26 18:36:52.433 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:52 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:53 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:53.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:53.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:53 np0005596060 nova_compute[247421]: 2026-01-26 18:36:53.851 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:53 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:54 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:54 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:55 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:55.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:36:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:55.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:36:56 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:56 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:56 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 51 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:36:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e10 no beacon from mds.-1.0 (gid: 24149 addr: [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] state: up:standby) since 45.0664
Jan 26 13:36:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:36:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:57 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:57 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:57 np0005596060 ceph-mon[74267]: Health check update: 1 slow ops, oldest one blocked for 51 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:36:57 np0005596060 nova_compute[247421]: 2026-01-26 18:36:57.435 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:57.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:57.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:58 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:58 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:36:58 np0005596060 nova_compute[247421]: 2026-01-26 18:36:58.853 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:36:59 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:36:59 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:36:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:36:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:36:59.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:36:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:36:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:36:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:36:59.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:37:00 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:37:00 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:37:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:37:01 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:37:01 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:37:01 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 56 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:37:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e10 no beacon from mds.-1.0 (gid: 24149 addr: [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] state: up:standby) since 50.0675
Jan 26 13:37:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:37:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:01.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:01.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:02 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:37:02 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:37:02 np0005596060 ceph-mon[74267]: Health check update: 1 slow ops, oldest one blocked for 56 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:37:02 np0005596060 nova_compute[247421]: 2026-01-26 18:37:02.438 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 64 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 1012 KiB/s wr, 4 op/s
Jan 26 13:37:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:37:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4041157041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:37:03 np0005596060 ceph-mon[74267]: 1 slow requests (by type [ 'started' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 26 13:37:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:03.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:03.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:03 np0005596060 nova_compute[247421]: 2026-01-26 18:37:03.856 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005763409059184813 of space, bias 1.0, pg target 0.1729022717755444 quantized to 32 (current 32)
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:37:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:37:04 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:37:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 75 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 1.4 MiB/s wr, 4 op/s
Jan 26 13:37:05 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:37:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000051s ======
Jan 26 13:37:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:05.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 26 13:37:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000051s ======
Jan 26 13:37:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:05.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 26 13:37:06 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:37:06 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check update: 1 slow ops, oldest one blocked for 61 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:37:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).mds e10 no beacon from mds.-1.0 (gid: 24149 addr: [v2:192.168.122.101:6804/245886810,v1:192.168.122.101:6805/245886810] state: up:standby) since 55.0685
Jan 26 13:37:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:37:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 84 MiB data, 363 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 21 op/s
Jan 26 13:37:07 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:37:07 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 1 slow ops, oldest one blocked for 61 sec, osd.0 has slow ops)
Jan 26 13:37:07 np0005596060 ceph-mon[74267]: Health check update: 1 slow ops, oldest one blocked for 61 sec, osd.0 has slow ops (SLOW_OPS)
Jan 26 13:37:07 np0005596060 nova_compute[247421]: 2026-01-26 18:37:07.439 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:07.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:37:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:07.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:37:08 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:37:08 np0005596060 ceph-mon[74267]: Health check cleared: SLOW_OPS (was: 1 slow ops, oldest one blocked for 61 sec, osd.0 has slow ops)
Jan 26 13:37:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 26 13:37:08 np0005596060 ceph-mon[74267]: paxos.0).electionLogic(23) init, last seen epoch 23, mid-election, bumping
Jan 26 13:37:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 13:37:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 26 13:37:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 340 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Jan 26 13:37:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 26 13:37:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 26 13:37:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.oqvedy=up:active} 2 up:standby
Jan 26 13:37:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Jan 26 13:37:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.mbryrf(active, since 57m), standbys: compute-2.cchxrf
Jan 26 13:37:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Jan 26 13:37:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 26 13:37:08 np0005596060 nova_compute[247421]: 2026-01-26 18:37:08.865 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:09 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:37:09 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.qpyzhk started
Jan 26 13:37:09 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check failed: 9 slow ops, oldest one blocked for 62 sec, mon.compute-1 has slow ops (SLOW_OPS)
Jan 26 13:37:09 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 13:37:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:09.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:09.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:10 np0005596060 ceph-mgr[74563]: mgr.server handle_open ignoring open from mgr.compute-1.qpyzhk 192.168.122.101:0/301613812; not ready for session (expect reconnect)
Jan 26 13:37:10 np0005596060 ceph-mon[74267]: mon.compute-0 calling monitor election
Jan 26 13:37:10 np0005596060 ceph-mon[74267]: mon.compute-2 calling monitor election
Jan 26 13:37:10 np0005596060 ceph-mon[74267]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 26 13:37:10 np0005596060 ceph-mon[74267]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Jan 26 13:37:10 np0005596060 ceph-mon[74267]: Cluster is now healthy
Jan 26 13:37:10 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.mbryrf(active, since 57m), standbys: compute-2.cchxrf, compute-1.qpyzhk
Jan 26 13:37:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.qpyzhk", "id": "compute-1.qpyzhk"} v 0) v1
Jan 26 13:37:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "mgr metadata", "who": "compute-1.qpyzhk", "id": "compute-1.qpyzhk"}]: dispatch
Jan 26 13:37:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 340 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Jan 26 13:37:11 np0005596060 ceph-mon[74267]: Health check failed: 9 slow ops, oldest one blocked for 62 sec, mon.compute-1 has slow ops (SLOW_OPS)
Jan 26 13:37:11 np0005596060 ceph-mon[74267]: overall HEALTH_OK
Jan 26 13:37:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:37:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:11.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:11.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:12 np0005596060 nova_compute[247421]: 2026-01-26 18:37:12.440 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Jan 26 13:37:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:13.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:13 np0005596060 nova_compute[247421]: 2026-01-26 18:37:13.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:37:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:13.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:13 np0005596060 nova_compute[247421]: 2026-01-26 18:37:13.867 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:37:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:37:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:37:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:37:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:37:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:37:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 815 KiB/s wr, 95 op/s
Jan 26 13:37:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:37:14.764 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:37:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:37:14.764 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:37:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:37:14.764 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:37:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:15.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:37:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:15.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:37:16 np0005596060 ceph-mon[74267]: log_channel(cluster) log [WRN] : Health check update: 10 slow ops, oldest one blocked for 67 sec, mon.compute-1 has slow ops (SLOW_OPS)
Jan 26 13:37:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:37:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 359 KiB/s wr, 95 op/s
Jan 26 13:37:17 np0005596060 ceph-mon[74267]: Health check update: 10 slow ops, oldest one blocked for 67 sec, mon.compute-1 has slow ops (SLOW_OPS)
Jan 26 13:37:17 np0005596060 nova_compute[247421]: 2026-01-26 18:37:17.441 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:17.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:17.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 56 KiB/s wr, 78 op/s
Jan 26 13:37:18 np0005596060 nova_compute[247421]: 2026-01-26 18:37:18.870 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:19 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 10 slow ops, oldest one blocked for 67 sec, mon.compute-1 has slow ops)
Jan 26 13:37:19 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 26 13:37:19 np0005596060 ceph-mon[74267]: Health check cleared: SLOW_OPS (was: 10 slow ops, oldest one blocked for 67 sec, mon.compute-1 has slow ops)
Jan 26 13:37:19 np0005596060 ceph-mon[74267]: Cluster is now healthy
Jan 26 13:37:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:19.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:19.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 52 op/s
Jan 26 13:37:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:37:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:21.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:21 np0005596060 nova_compute[247421]: 2026-01-26 18:37:21.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:37:21 np0005596060 nova_compute[247421]: 2026-01-26 18:37:21.649 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:37:21 np0005596060 nova_compute[247421]: 2026-01-26 18:37:21.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:37:21 np0005596060 nova_compute[247421]: 2026-01-26 18:37:21.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:37:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:21.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:22 np0005596060 nova_compute[247421]: 2026-01-26 18:37:22.444 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 118 MiB data, 403 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Jan 26 13:37:22 np0005596060 podman[293694]: 2026-01-26 18:37:22.801500185 +0000 UTC m=+0.054958214 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 26 13:37:22 np0005596060 podman[293710]: 2026-01-26 18:37:22.833154111 +0000 UTC m=+0.081004209 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:37:23 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5b17690f-e2c1-483b-913d-6eb27de9c54d does not exist
Jan 26 13:37:23 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev a7ef0a79-022f-43a8-889c-06fca5262834 does not exist
Jan 26 13:37:23 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev c9ba8c49-836f-4430-9df2-0728b8d1801a does not exist
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:37:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:23.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:37:23 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:37:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:23.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:23 np0005596060 nova_compute[247421]: 2026-01-26 18:37:23.872 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:24 np0005596060 podman[294000]: 2026-01-26 18:37:24.113915864 +0000 UTC m=+0.038782207 container create 0888eba638640e5631a3a7806438720be87d1594050b637e6b62124d26cb2d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 13:37:24 np0005596060 systemd[1]: Started libpod-conmon-0888eba638640e5631a3a7806438720be87d1594050b637e6b62124d26cb2d63.scope.
Jan 26 13:37:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:37:24 np0005596060 podman[294000]: 2026-01-26 18:37:24.097893291 +0000 UTC m=+0.022759644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:37:24 np0005596060 podman[294000]: 2026-01-26 18:37:24.196866672 +0000 UTC m=+0.121733035 container init 0888eba638640e5631a3a7806438720be87d1594050b637e6b62124d26cb2d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 13:37:24 np0005596060 podman[294000]: 2026-01-26 18:37:24.20314534 +0000 UTC m=+0.128011683 container start 0888eba638640e5631a3a7806438720be87d1594050b637e6b62124d26cb2d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_driscoll, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 13:37:24 np0005596060 podman[294000]: 2026-01-26 18:37:24.206235238 +0000 UTC m=+0.131101611 container attach 0888eba638640e5631a3a7806438720be87d1594050b637e6b62124d26cb2d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_driscoll, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 13:37:24 np0005596060 sweet_driscoll[294016]: 167 167
Jan 26 13:37:24 np0005596060 systemd[1]: libpod-0888eba638640e5631a3a7806438720be87d1594050b637e6b62124d26cb2d63.scope: Deactivated successfully.
Jan 26 13:37:24 np0005596060 conmon[294016]: conmon 0888eba638640e5631a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0888eba638640e5631a3a7806438720be87d1594050b637e6b62124d26cb2d63.scope/container/memory.events
Jan 26 13:37:24 np0005596060 podman[294000]: 2026-01-26 18:37:24.211799038 +0000 UTC m=+0.136665391 container died 0888eba638640e5631a3a7806438720be87d1594050b637e6b62124d26cb2d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_driscoll, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 26 13:37:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ce1245edfe61019ff7c9f29a7ac952f7d168c614cc0ce44c2345d73dafddc0d7-merged.mount: Deactivated successfully.
Jan 26 13:37:24 np0005596060 podman[294000]: 2026-01-26 18:37:24.250240805 +0000 UTC m=+0.175107158 container remove 0888eba638640e5631a3a7806438720be87d1594050b637e6b62124d26cb2d63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_driscoll, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:37:24 np0005596060 systemd[1]: libpod-conmon-0888eba638640e5631a3a7806438720be87d1594050b637e6b62124d26cb2d63.scope: Deactivated successfully.
Jan 26 13:37:24 np0005596060 podman[294040]: 2026-01-26 18:37:24.394141107 +0000 UTC m=+0.037582457 container create 1b02f8ae8945112fb35004c1021a5156ec370a1fc525915dd6762aa44d798087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 13:37:24 np0005596060 systemd[1]: Started libpod-conmon-1b02f8ae8945112fb35004c1021a5156ec370a1fc525915dd6762aa44d798087.scope.
Jan 26 13:37:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:37:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82e1c1c9189fa0cd9e8569e85edc6729f8fa376153b3a3e2d08ffb2fc073c4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82e1c1c9189fa0cd9e8569e85edc6729f8fa376153b3a3e2d08ffb2fc073c4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82e1c1c9189fa0cd9e8569e85edc6729f8fa376153b3a3e2d08ffb2fc073c4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82e1c1c9189fa0cd9e8569e85edc6729f8fa376153b3a3e2d08ffb2fc073c4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82e1c1c9189fa0cd9e8569e85edc6729f8fa376153b3a3e2d08ffb2fc073c4b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:24 np0005596060 podman[294040]: 2026-01-26 18:37:24.468470458 +0000 UTC m=+0.111911818 container init 1b02f8ae8945112fb35004c1021a5156ec370a1fc525915dd6762aa44d798087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:37:24 np0005596060 podman[294040]: 2026-01-26 18:37:24.377480058 +0000 UTC m=+0.020921428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:37:24 np0005596060 podman[294040]: 2026-01-26 18:37:24.475226208 +0000 UTC m=+0.118667568 container start 1b02f8ae8945112fb35004c1021a5156ec370a1fc525915dd6762aa44d798087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:37:24 np0005596060 podman[294040]: 2026-01-26 18:37:24.478323416 +0000 UTC m=+0.121764796 container attach 1b02f8ae8945112fb35004c1021a5156ec370a1fc525915dd6762aa44d798087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:37:24 np0005596060 nova_compute[247421]: 2026-01-26 18:37:24.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:37:24 np0005596060 nova_compute[247421]: 2026-01-26 18:37:24.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:37:24 np0005596060 nova_compute[247421]: 2026-01-26 18:37:24.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:37:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 564 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 26 13:37:25 np0005596060 interesting_taussig[294057]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:37:25 np0005596060 interesting_taussig[294057]: --> relative data size: 1.0
Jan 26 13:37:25 np0005596060 interesting_taussig[294057]: --> All data devices are unavailable
Jan 26 13:37:25 np0005596060 systemd[1]: libpod-1b02f8ae8945112fb35004c1021a5156ec370a1fc525915dd6762aa44d798087.scope: Deactivated successfully.
Jan 26 13:37:25 np0005596060 podman[294040]: 2026-01-26 18:37:25.298455176 +0000 UTC m=+0.941896536 container died 1b02f8ae8945112fb35004c1021a5156ec370a1fc525915dd6762aa44d798087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 26 13:37:25 np0005596060 systemd[1]: var-lib-containers-storage-overlay-e82e1c1c9189fa0cd9e8569e85edc6729f8fa376153b3a3e2d08ffb2fc073c4b-merged.mount: Deactivated successfully.
Jan 26 13:37:25 np0005596060 podman[294040]: 2026-01-26 18:37:25.352301102 +0000 UTC m=+0.995742462 container remove 1b02f8ae8945112fb35004c1021a5156ec370a1fc525915dd6762aa44d798087 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_taussig, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:37:25 np0005596060 systemd[1]: libpod-conmon-1b02f8ae8945112fb35004c1021a5156ec370a1fc525915dd6762aa44d798087.scope: Deactivated successfully.
Jan 26 13:37:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:25.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:25.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:25 np0005596060 podman[294223]: 2026-01-26 18:37:25.961113214 +0000 UTC m=+0.044822909 container create 1bdd746b9a1a426828a77d59c481dd87f2afca715ed4bcc4de236c0f11f188b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:37:25 np0005596060 systemd[1]: Started libpod-conmon-1bdd746b9a1a426828a77d59c481dd87f2afca715ed4bcc4de236c0f11f188b0.scope.
Jan 26 13:37:26 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:37:26 np0005596060 podman[294223]: 2026-01-26 18:37:25.941693305 +0000 UTC m=+0.025403020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:37:26 np0005596060 podman[294223]: 2026-01-26 18:37:26.047863857 +0000 UTC m=+0.131573582 container init 1bdd746b9a1a426828a77d59c481dd87f2afca715ed4bcc4de236c0f11f188b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:37:26 np0005596060 podman[294223]: 2026-01-26 18:37:26.058898115 +0000 UTC m=+0.142607810 container start 1bdd746b9a1a426828a77d59c481dd87f2afca715ed4bcc4de236c0f11f188b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:37:26 np0005596060 podman[294223]: 2026-01-26 18:37:26.062935297 +0000 UTC m=+0.146645032 container attach 1bdd746b9a1a426828a77d59c481dd87f2afca715ed4bcc4de236c0f11f188b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:37:26 np0005596060 nervous_wescoff[294239]: 167 167
Jan 26 13:37:26 np0005596060 systemd[1]: libpod-1bdd746b9a1a426828a77d59c481dd87f2afca715ed4bcc4de236c0f11f188b0.scope: Deactivated successfully.
Jan 26 13:37:26 np0005596060 podman[294223]: 2026-01-26 18:37:26.06783859 +0000 UTC m=+0.151548285 container died 1bdd746b9a1a426828a77d59c481dd87f2afca715ed4bcc4de236c0f11f188b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 13:37:26 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c622932f804893dbab8b10e13ea6892ece10933d1eb2651cd9ecfd362ec8f494-merged.mount: Deactivated successfully.
Jan 26 13:37:26 np0005596060 podman[294223]: 2026-01-26 18:37:26.104920513 +0000 UTC m=+0.188630228 container remove 1bdd746b9a1a426828a77d59c481dd87f2afca715ed4bcc4de236c0f11f188b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wescoff, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 13:37:26 np0005596060 systemd[1]: libpod-conmon-1bdd746b9a1a426828a77d59c481dd87f2afca715ed4bcc4de236c0f11f188b0.scope: Deactivated successfully.
Jan 26 13:37:26 np0005596060 nova_compute[247421]: 2026-01-26 18:37:26.153 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:37:26 np0005596060 nova_compute[247421]: 2026-01-26 18:37:26.154 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:37:26 np0005596060 nova_compute[247421]: 2026-01-26 18:37:26.154 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:37:26 np0005596060 podman[294266]: 2026-01-26 18:37:26.293234653 +0000 UTC m=+0.043493466 container create 18e1ca7ec5fabf9a2d494fc9cf612860226ff5341d8d0444ffde6eaaf635cc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:37:26 np0005596060 systemd[1]: Started libpod-conmon-18e1ca7ec5fabf9a2d494fc9cf612860226ff5341d8d0444ffde6eaaf635cc07.scope.
Jan 26 13:37:26 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:37:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23196a4bd1c4af81655d936ad34cf82f0d268be27ef9a978ec2efb90746b1849/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23196a4bd1c4af81655d936ad34cf82f0d268be27ef9a978ec2efb90746b1849/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23196a4bd1c4af81655d936ad34cf82f0d268be27ef9a978ec2efb90746b1849/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23196a4bd1c4af81655d936ad34cf82f0d268be27ef9a978ec2efb90746b1849/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:26 np0005596060 podman[294266]: 2026-01-26 18:37:26.365130662 +0000 UTC m=+0.115389485 container init 18e1ca7ec5fabf9a2d494fc9cf612860226ff5341d8d0444ffde6eaaf635cc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 13:37:26 np0005596060 podman[294266]: 2026-01-26 18:37:26.275113607 +0000 UTC m=+0.025372430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:37:26 np0005596060 podman[294266]: 2026-01-26 18:37:26.371439741 +0000 UTC m=+0.121698544 container start 18e1ca7ec5fabf9a2d494fc9cf612860226ff5341d8d0444ffde6eaaf635cc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_germain, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:37:26 np0005596060 podman[294266]: 2026-01-26 18:37:26.374459427 +0000 UTC m=+0.124718230 container attach 18e1ca7ec5fabf9a2d494fc9cf612860226ff5341d8d0444ffde6eaaf635cc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_germain, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 13:37:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:37:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 13:37:27 np0005596060 lucid_germain[294283]: {
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:    "1": [
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:        {
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            "devices": [
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "/dev/loop3"
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            ],
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            "lv_name": "ceph_lv0",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            "lv_size": "7511998464",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            "name": "ceph_lv0",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            "tags": {
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "ceph.cluster_name": "ceph",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "ceph.crush_device_class": "",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "ceph.encrypted": "0",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "ceph.osd_id": "1",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "ceph.type": "block",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:                "ceph.vdo": "0"
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            },
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            "type": "block",
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:            "vg_name": "ceph_vg0"
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:        }
Jan 26 13:37:27 np0005596060 lucid_germain[294283]:    ]
Jan 26 13:37:27 np0005596060 lucid_germain[294283]: }
Jan 26 13:37:27 np0005596060 systemd[1]: libpod-18e1ca7ec5fabf9a2d494fc9cf612860226ff5341d8d0444ffde6eaaf635cc07.scope: Deactivated successfully.
Jan 26 13:37:27 np0005596060 podman[294266]: 2026-01-26 18:37:27.179352993 +0000 UTC m=+0.929611796 container died 18e1ca7ec5fabf9a2d494fc9cf612860226ff5341d8d0444ffde6eaaf635cc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 13:37:27 np0005596060 systemd[1]: var-lib-containers-storage-overlay-23196a4bd1c4af81655d936ad34cf82f0d268be27ef9a978ec2efb90746b1849-merged.mount: Deactivated successfully.
Jan 26 13:37:27 np0005596060 podman[294266]: 2026-01-26 18:37:27.235986019 +0000 UTC m=+0.986244812 container remove 18e1ca7ec5fabf9a2d494fc9cf612860226ff5341d8d0444ffde6eaaf635cc07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_germain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 13:37:27 np0005596060 systemd[1]: libpod-conmon-18e1ca7ec5fabf9a2d494fc9cf612860226ff5341d8d0444ffde6eaaf635cc07.scope: Deactivated successfully.
Jan 26 13:37:27 np0005596060 nova_compute[247421]: 2026-01-26 18:37:27.445 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:37:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:27.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:37:27 np0005596060 nova_compute[247421]: 2026-01-26 18:37:27.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:37:27 np0005596060 nova_compute[247421]: 2026-01-26 18:37:27.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:37:27 np0005596060 nova_compute[247421]: 2026-01-26 18:37:27.690 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:37:27 np0005596060 nova_compute[247421]: 2026-01-26 18:37:27.690 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:37:27 np0005596060 nova_compute[247421]: 2026-01-26 18:37:27.690 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:37:27 np0005596060 nova_compute[247421]: 2026-01-26 18:37:27.690 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:37:27 np0005596060 nova_compute[247421]: 2026-01-26 18:37:27.691 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:37:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:27.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:27 np0005596060 podman[294444]: 2026-01-26 18:37:27.834110292 +0000 UTC m=+0.042166542 container create 6ff31d46df1991718e8283efbbea9c7cff61801937c61a6f60538b76fa0fddd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 13:37:27 np0005596060 systemd[1]: Started libpod-conmon-6ff31d46df1991718e8283efbbea9c7cff61801937c61a6f60538b76fa0fddd0.scope.
Jan 26 13:37:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:37:27 np0005596060 podman[294444]: 2026-01-26 18:37:27.813869283 +0000 UTC m=+0.021925563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:37:27 np0005596060 podman[294444]: 2026-01-26 18:37:27.912953496 +0000 UTC m=+0.121009776 container init 6ff31d46df1991718e8283efbbea9c7cff61801937c61a6f60538b76fa0fddd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_visvesvaraya, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:37:27 np0005596060 podman[294444]: 2026-01-26 18:37:27.922359083 +0000 UTC m=+0.130415333 container start 6ff31d46df1991718e8283efbbea9c7cff61801937c61a6f60538b76fa0fddd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_visvesvaraya, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 13:37:27 np0005596060 podman[294444]: 2026-01-26 18:37:27.925500872 +0000 UTC m=+0.133557142 container attach 6ff31d46df1991718e8283efbbea9c7cff61801937c61a6f60538b76fa0fddd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:37:27 np0005596060 relaxed_visvesvaraya[294478]: 167 167
Jan 26 13:37:27 np0005596060 systemd[1]: libpod-6ff31d46df1991718e8283efbbea9c7cff61801937c61a6f60538b76fa0fddd0.scope: Deactivated successfully.
Jan 26 13:37:27 np0005596060 podman[294444]: 2026-01-26 18:37:27.928319923 +0000 UTC m=+0.136376173 container died 6ff31d46df1991718e8283efbbea9c7cff61801937c61a6f60538b76fa0fddd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 26 13:37:27 np0005596060 systemd[1]: var-lib-containers-storage-overlay-012d4dd8534cb36e0377030537b6407fc3c2963bf491ccfe22c3e1513baf3b0a-merged.mount: Deactivated successfully.
Jan 26 13:37:27 np0005596060 podman[294444]: 2026-01-26 18:37:27.982727623 +0000 UTC m=+0.190783873 container remove 6ff31d46df1991718e8283efbbea9c7cff61801937c61a6f60538b76fa0fddd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_visvesvaraya, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 26 13:37:27 np0005596060 systemd[1]: libpod-conmon-6ff31d46df1991718e8283efbbea9c7cff61801937c61a6f60538b76fa0fddd0.scope: Deactivated successfully.
Jan 26 13:37:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:37:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1138442790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:37:28 np0005596060 nova_compute[247421]: 2026-01-26 18:37:28.121 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:37:28 np0005596060 podman[294504]: 2026-01-26 18:37:28.155231174 +0000 UTC m=+0.044831769 container create 477b44b46bb65256d553850df3db44881186e9dbc514ff1a311f3129229344b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:37:28 np0005596060 systemd[1]: Started libpod-conmon-477b44b46bb65256d553850df3db44881186e9dbc514ff1a311f3129229344b8.scope.
Jan 26 13:37:28 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:37:28 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4a223d16d22d1eb2b1c87e5426baeca82d57208f69d5c8e517795694ad18cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:28 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4a223d16d22d1eb2b1c87e5426baeca82d57208f69d5c8e517795694ad18cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:28 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4a223d16d22d1eb2b1c87e5426baeca82d57208f69d5c8e517795694ad18cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:28 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc4a223d16d22d1eb2b1c87e5426baeca82d57208f69d5c8e517795694ad18cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:37:28 np0005596060 podman[294504]: 2026-01-26 18:37:28.136096652 +0000 UTC m=+0.025697267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:37:28 np0005596060 podman[294504]: 2026-01-26 18:37:28.23611342 +0000 UTC m=+0.125714045 container init 477b44b46bb65256d553850df3db44881186e9dbc514ff1a311f3129229344b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chatelet, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 13:37:28 np0005596060 podman[294504]: 2026-01-26 18:37:28.243345262 +0000 UTC m=+0.132945857 container start 477b44b46bb65256d553850df3db44881186e9dbc514ff1a311f3129229344b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chatelet, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:37:28 np0005596060 podman[294504]: 2026-01-26 18:37:28.247832495 +0000 UTC m=+0.137433090 container attach 477b44b46bb65256d553850df3db44881186e9dbc514ff1a311f3129229344b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Jan 26 13:37:28 np0005596060 nova_compute[247421]: 2026-01-26 18:37:28.303 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:37:28 np0005596060 nova_compute[247421]: 2026-01-26 18:37:28.306 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4647MB free_disk=20.94287872314453GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:37:28 np0005596060 nova_compute[247421]: 2026-01-26 18:37:28.306 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:37:28 np0005596060 nova_compute[247421]: 2026-01-26 18:37:28.306 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:37:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:37:28 np0005596060 nova_compute[247421]: 2026-01-26 18:37:28.876 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:29 np0005596060 busy_chatelet[294523]: {
Jan 26 13:37:29 np0005596060 busy_chatelet[294523]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:37:29 np0005596060 busy_chatelet[294523]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:37:29 np0005596060 busy_chatelet[294523]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:37:29 np0005596060 busy_chatelet[294523]:        "osd_id": 1,
Jan 26 13:37:29 np0005596060 busy_chatelet[294523]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:37:29 np0005596060 busy_chatelet[294523]:        "type": "bluestore"
Jan 26 13:37:29 np0005596060 busy_chatelet[294523]:    }
Jan 26 13:37:29 np0005596060 busy_chatelet[294523]: }
Jan 26 13:37:29 np0005596060 systemd[1]: libpod-477b44b46bb65256d553850df3db44881186e9dbc514ff1a311f3129229344b8.scope: Deactivated successfully.
Jan 26 13:37:29 np0005596060 podman[294504]: 2026-01-26 18:37:29.089952359 +0000 UTC m=+0.979552954 container died 477b44b46bb65256d553850df3db44881186e9dbc514ff1a311f3129229344b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chatelet, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:37:29 np0005596060 systemd[1]: var-lib-containers-storage-overlay-dc4a223d16d22d1eb2b1c87e5426baeca82d57208f69d5c8e517795694ad18cb-merged.mount: Deactivated successfully.
Jan 26 13:37:29 np0005596060 podman[294504]: 2026-01-26 18:37:29.141504046 +0000 UTC m=+1.031104631 container remove 477b44b46bb65256d553850df3db44881186e9dbc514ff1a311f3129229344b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_chatelet, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:37:29 np0005596060 systemd[1]: libpod-conmon-477b44b46bb65256d553850df3db44881186e9dbc514ff1a311f3129229344b8.scope: Deactivated successfully.
Jan 26 13:37:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:37:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:37:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:37:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:37:29 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 4184eed2-c1ec-4300-8531-f89461d2c3ec does not exist
Jan 26 13:37:29 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9b01a70a-2c16-4191-8cdb-92239dd2855b does not exist
Jan 26 13:37:29 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev f605ae2c-8e18-46c1-8a90-dea52d09c282 does not exist
Jan 26 13:37:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:29.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:29.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:30 np0005596060 nova_compute[247421]: 2026-01-26 18:37:30.006 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:37:30 np0005596060 nova_compute[247421]: 2026-01-26 18:37:30.006 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:37:30 np0005596060 nova_compute[247421]: 2026-01-26 18:37:30.020 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:37:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:37:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489329184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:37:30 np0005596060 nova_compute[247421]: 2026-01-26 18:37:30.451 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:37:30 np0005596060 nova_compute[247421]: 2026-01-26 18:37:30.459 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:37:30 np0005596060 nova_compute[247421]: 2026-01-26 18:37:30.488 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:37:30 np0005596060 nova_compute[247421]: 2026-01-26 18:37:30.490 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:37:30 np0005596060 nova_compute[247421]: 2026-01-26 18:37:30.490 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:37:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:37:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:37:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:37:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:37:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:37:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:31.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:37:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:31.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:32 np0005596060 nova_compute[247421]: 2026-01-26 18:37:32.447 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:37:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:33.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:33.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:33 np0005596060 nova_compute[247421]: 2026-01-26 18:37:33.877 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 303 KiB/s wr, 4 op/s
Jan 26 13:37:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:35.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:35.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:37:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:37:36.556 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:37:36 np0005596060 nova_compute[247421]: 2026-01-26 18:37:36.556 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:36 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:37:36.557 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:37:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 767 B/s rd, 25 KiB/s wr, 3 op/s
Jan 26 13:37:37 np0005596060 nova_compute[247421]: 2026-01-26 18:37:37.449 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:37.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:37.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:38 np0005596060 nova_compute[247421]: 2026-01-26 18:37:38.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:37:38 np0005596060 nova_compute[247421]: 2026-01-26 18:37:38.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 26 13:37:38 np0005596060 nova_compute[247421]: 2026-01-26 18:37:38.672 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 26 13:37:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Jan 26 13:37:38 np0005596060 nova_compute[247421]: 2026-01-26 18:37:38.878 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:37:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:39.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:37:39 np0005596060 nova_compute[247421]: 2026-01-26 18:37:39.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:37:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:37:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:39.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:37:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:37:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3255794877' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:37:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:37:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3255794877' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:37:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 1.2 KiB/s rd, 19 KiB/s wr, 3 op/s
Jan 26 13:37:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:37:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:41.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:41.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:42 np0005596060 nova_compute[247421]: 2026-01-26 18:37:42.491 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 19 KiB/s wr, 54 op/s
Jan 26 13:37:43 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:37:43.559 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:37:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:37:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:43.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:37:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:43.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:43 np0005596060 nova_compute[247421]: 2026-01-26 18:37:43.881 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:37:44
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'vms', '.rgw.root', 'images', '.mgr', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 72 op/s
Jan 26 13:37:44 np0005596060 nova_compute[247421]: 2026-01-26 18:37:44.767 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:37:44 np0005596060 nova_compute[247421]: 2026-01-26 18:37:44.768 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:37:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:37:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:45.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:45.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:37:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 72 op/s
Jan 26 13:37:47 np0005596060 nova_compute[247421]: 2026-01-26 18:37:47.493 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:47.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:47.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 6.3 KiB/s wr, 70 op/s
Jan 26 13:37:48 np0005596060 nova_compute[247421]: 2026-01-26 18:37:48.884 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:49.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:49.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 69 op/s
Jan 26 13:37:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:37:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:51.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:51.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:52 np0005596060 nova_compute[247421]: 2026-01-26 18:37:52.548 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 11 KiB/s wr, 109 op/s
Jan 26 13:37:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:53.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:53.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:53 np0005596060 podman[294689]: 2026-01-26 18:37:53.78975687 +0000 UTC m=+0.052787329 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:37:53 np0005596060 podman[294690]: 2026-01-26 18:37:53.821012277 +0000 UTC m=+0.084238521 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Jan 26 13:37:53 np0005596060 nova_compute[247421]: 2026-01-26 18:37:53.886 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 121 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 62 op/s
Jan 26 13:37:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:55.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:55.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:37:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 122 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 529 KiB/s rd, 14 KiB/s wr, 44 op/s
Jan 26 13:37:57 np0005596060 nova_compute[247421]: 2026-01-26 18:37:57.550 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:57.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:57.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 122 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 45 op/s
Jan 26 13:37:58 np0005596060 nova_compute[247421]: 2026-01-26 18:37:58.888 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:37:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:37:59.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:37:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:37:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:37:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:37:59.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 122 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 530 KiB/s rd, 22 KiB/s wr, 44 op/s
Jan 26 13:38:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:38:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:01.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:01.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:02 np0005596060 nova_compute[247421]: 2026-01-26 18:38:02.552 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 54 MiB data, 369 MiB used, 21 GiB / 21 GiB avail; 550 KiB/s rd, 23 KiB/s wr, 72 op/s
Jan 26 13:38:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:03.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:03.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:03 np0005596060 nova_compute[247421]: 2026-01-26 18:38:03.923 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0004151253954959985 of space, bias 1.0, pg target 0.12453761864879954 quantized to 32 (current 32)
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:38:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:38:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 63 KiB/s rd, 12 KiB/s wr, 32 op/s
Jan 26 13:38:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:05.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:05.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:38:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Jan 26 13:38:07 np0005596060 nova_compute[247421]: 2026-01-26 18:38:07.554 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:07.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:07.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 9.8 KiB/s wr, 28 op/s
Jan 26 13:38:08 np0005596060 nova_compute[247421]: 2026-01-26 18:38:08.925 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:09.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:09.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 13:38:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:38:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:11.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:11.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:12 np0005596060 nova_compute[247421]: 2026-01-26 18:38:12.650 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 13:38:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:13.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:13.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:13 np0005596060 nova_compute[247421]: 2026-01-26 18:38:13.928 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:38:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:38:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:38:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:38:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:38:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:38:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:14.765 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:38:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:14.766 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:38:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:14.766 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:38:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 0 B/s wr, 0 op/s
Jan 26 13:38:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:15.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:15 np0005596060 nova_compute[247421]: 2026-01-26 18:38:15.701 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:38:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:15.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:38:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:38:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:17.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:17 np0005596060 nova_compute[247421]: 2026-01-26 18:38:17.691 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:17.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:38:18 np0005596060 nova_compute[247421]: 2026-01-26 18:38:18.930 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:19.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:19.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:38:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:38:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:21.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:21 np0005596060 nova_compute[247421]: 2026-01-26 18:38:21.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:38:21 np0005596060 nova_compute[247421]: 2026-01-26 18:38:21.649 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:38:21 np0005596060 nova_compute[247421]: 2026-01-26 18:38:21.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:38:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:21.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:22 np0005596060 nova_compute[247421]: 2026-01-26 18:38:22.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:38:22 np0005596060 nova_compute[247421]: 2026-01-26 18:38:22.694 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:38:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:23.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:23.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:23 np0005596060 nova_compute[247421]: 2026-01-26 18:38:23.933 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:38:24 np0005596060 podman[294853]: 2026-01-26 18:38:24.812140973 +0000 UTC m=+0.065545520 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:38:24 np0005596060 podman[294854]: 2026-01-26 18:38:24.856987052 +0000 UTC m=+0.099175737 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 13:38:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:25.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:25 np0005596060 nova_compute[247421]: 2026-01-26 18:38:25.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:38:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:25.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:38:26 np0005596060 nova_compute[247421]: 2026-01-26 18:38:26.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:38:26 np0005596060 nova_compute[247421]: 2026-01-26 18:38:26.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:38:26 np0005596060 nova_compute[247421]: 2026-01-26 18:38:26.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:38:26 np0005596060 nova_compute[247421]: 2026-01-26 18:38:26.724 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:38:26 np0005596060 nova_compute[247421]: 2026-01-26 18:38:26.724 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:38:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:38:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:27.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:27 np0005596060 nova_compute[247421]: 2026-01-26 18:38:27.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:38:27 np0005596060 nova_compute[247421]: 2026-01-26 18:38:27.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:38:27 np0005596060 nova_compute[247421]: 2026-01-26 18:38:27.695 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:27.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:27 np0005596060 nova_compute[247421]: 2026-01-26 18:38:27.821 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:38:27 np0005596060 nova_compute[247421]: 2026-01-26 18:38:27.822 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:38:27 np0005596060 nova_compute[247421]: 2026-01-26 18:38:27.822 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:38:27 np0005596060 nova_compute[247421]: 2026-01-26 18:38:27.822 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:38:27 np0005596060 nova_compute[247421]: 2026-01-26 18:38:27.823 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:38:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:38:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/716702766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:38:28 np0005596060 nova_compute[247421]: 2026-01-26 18:38:28.256 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:38:28 np0005596060 nova_compute[247421]: 2026-01-26 18:38:28.407 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:38:28 np0005596060 nova_compute[247421]: 2026-01-26 18:38:28.409 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4680MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:38:28 np0005596060 nova_compute[247421]: 2026-01-26 18:38:28.409 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:38:28 np0005596060 nova_compute[247421]: 2026-01-26 18:38:28.410 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:38:28 np0005596060 nova_compute[247421]: 2026-01-26 18:38:28.548 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:38:28 np0005596060 nova_compute[247421]: 2026-01-26 18:38:28.549 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:38:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:38:28 np0005596060 nova_compute[247421]: 2026-01-26 18:38:28.935 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.051 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance 766b2be2-d46f-4f27-ad07-a91017eaddaf has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.051 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.051 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.067 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing inventories for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.119 247428 DEBUG nova.compute.manager [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.140 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating ProviderTree inventory for provider c679f5ea-e093-4909-bb04-0342c8551a8f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.140 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.162 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing aggregate associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.190 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing trait associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.226 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.278 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:38:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:29.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:29 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:38:29 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2752546304' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.668 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.674 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:38:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:29.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.933 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.936 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.937 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.938 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.956 247428 DEBUG nova.virt.hardware [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 26 13:38:29 np0005596060 nova_compute[247421]: 2026-01-26 18:38:29.957 247428 INFO nova.compute.claims [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:38:30 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 95d4bfea-7f33-4526-ad54-64731219f321 does not exist
Jan 26 13:38:30 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 03897688-5f90-4025-9f5f-be3e16a9de5f does not exist
Jan 26 13:38:30 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 260a0844-413d-4fee-85d0-b75881b3ad04 does not exist
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:38:30 np0005596060 nova_compute[247421]: 2026-01-26 18:38:30.665 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:38:30 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:38:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 41 MiB data, 360 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:38:31 np0005596060 podman[295237]: 2026-01-26 18:38:31.090230678 +0000 UTC m=+0.046581964 container create 850b7c38e4221ee293ad36d0d195b714d85abc25e477c32e56d2e327404ce912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:38:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:38:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109168060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:38:31 np0005596060 systemd[1]: Started libpod-conmon-850b7c38e4221ee293ad36d0d195b714d85abc25e477c32e56d2e327404ce912.scope.
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.134 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.142 247428 DEBUG nova.compute.provider_tree [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:38:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:38:31 np0005596060 podman[295237]: 2026-01-26 18:38:31.069134537 +0000 UTC m=+0.025485833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.177 247428 DEBUG nova.scheduler.client.report [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:38:31 np0005596060 podman[295237]: 2026-01-26 18:38:31.180755746 +0000 UTC m=+0.137107052 container init 850b7c38e4221ee293ad36d0d195b714d85abc25e477c32e56d2e327404ce912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 13:38:31 np0005596060 podman[295237]: 2026-01-26 18:38:31.189080786 +0000 UTC m=+0.145432072 container start 850b7c38e4221ee293ad36d0d195b714d85abc25e477c32e56d2e327404ce912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 13:38:31 np0005596060 podman[295237]: 2026-01-26 18:38:31.192547953 +0000 UTC m=+0.148899259 container attach 850b7c38e4221ee293ad36d0d195b714d85abc25e477c32e56d2e327404ce912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swirles, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 26 13:38:31 np0005596060 jolly_swirles[295255]: 167 167
Jan 26 13:38:31 np0005596060 systemd[1]: libpod-850b7c38e4221ee293ad36d0d195b714d85abc25e477c32e56d2e327404ce912.scope: Deactivated successfully.
Jan 26 13:38:31 np0005596060 podman[295237]: 2026-01-26 18:38:31.195087937 +0000 UTC m=+0.151439223 container died 850b7c38e4221ee293ad36d0d195b714d85abc25e477c32e56d2e327404ce912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 13:38:31 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7788a42cb339db00f4dec5092d8a8aa904a382bd2f1efad1b2fbbf81227170bb-merged.mount: Deactivated successfully.
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.226 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.227 247428 DEBUG nova.compute.manager [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 26 13:38:31 np0005596060 podman[295237]: 2026-01-26 18:38:31.233921044 +0000 UTC m=+0.190272330 container remove 850b7c38e4221ee293ad36d0d195b714d85abc25e477c32e56d2e327404ce912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:38:31 np0005596060 systemd[1]: libpod-conmon-850b7c38e4221ee293ad36d0d195b714d85abc25e477c32e56d2e327404ce912.scope: Deactivated successfully.
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.279 247428 DEBUG nova.compute.manager [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.279 247428 DEBUG nova.network.neutron [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.311 247428 INFO nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.345 247428 DEBUG nova.compute.manager [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 26 13:38:31 np0005596060 podman[295278]: 2026-01-26 18:38:31.405577394 +0000 UTC m=+0.048766578 container create 1f99064959ee838c994151b8105025b640cdf81abb714f85ac538037fa1e39e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jackson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 13:38:31 np0005596060 systemd[1]: Started libpod-conmon-1f99064959ee838c994151b8105025b640cdf81abb714f85ac538037fa1e39e6.scope.
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.451 247428 DEBUG nova.compute.manager [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.454 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.455 247428 INFO nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Creating image(s)#033[00m
Jan 26 13:38:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:38:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebecd73d752a933f3419dab94858c840c0d448d8a35a7663a1291ee11459028/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebecd73d752a933f3419dab94858c840c0d448d8a35a7663a1291ee11459028/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebecd73d752a933f3419dab94858c840c0d448d8a35a7663a1291ee11459028/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebecd73d752a933f3419dab94858c840c0d448d8a35a7663a1291ee11459028/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bebecd73d752a933f3419dab94858c840c0d448d8a35a7663a1291ee11459028/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:31 np0005596060 podman[295278]: 2026-01-26 18:38:31.384283928 +0000 UTC m=+0.027473132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:38:31 np0005596060 podman[295278]: 2026-01-26 18:38:31.491996839 +0000 UTC m=+0.135186073 container init 1f99064959ee838c994151b8105025b640cdf81abb714f85ac538037fa1e39e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jackson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 13:38:31 np0005596060 podman[295278]: 2026-01-26 18:38:31.498945824 +0000 UTC m=+0.142135018 container start 1f99064959ee838c994151b8105025b640cdf81abb714f85ac538037fa1e39e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jackson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.497 247428 DEBUG nova.storage.rbd_utils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:38:31 np0005596060 podman[295278]: 2026-01-26 18:38:31.502810172 +0000 UTC m=+0.145999356 container attach 1f99064959ee838c994151b8105025b640cdf81abb714f85ac538037fa1e39e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jackson, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.524 247428 DEBUG nova.storage.rbd_utils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:38:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.549 247428 DEBUG nova.storage.rbd_utils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.553 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:38:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:31.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.636 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.637 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "0e27310cde9db7031eb6052434134c1283ddf216" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.638 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.638 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.664 247428 DEBUG nova.storage.rbd_utils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.668 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:38:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:38:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:31.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:38:31 np0005596060 nova_compute[247421]: 2026-01-26 18:38:31.954 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.286s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:38:32 np0005596060 nova_compute[247421]: 2026-01-26 18:38:32.019 247428 DEBUG nova.storage.rbd_utils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] resizing rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 26 13:38:32 np0005596060 nova_compute[247421]: 2026-01-26 18:38:32.113 247428 DEBUG nova.objects.instance [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'migration_context' on Instance uuid 766b2be2-d46f-4f27-ad07-a91017eaddaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:38:32 np0005596060 nova_compute[247421]: 2026-01-26 18:38:32.215 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 26 13:38:32 np0005596060 nova_compute[247421]: 2026-01-26 18:38:32.215 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Ensure instance console log exists: /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 26 13:38:32 np0005596060 nova_compute[247421]: 2026-01-26 18:38:32.216 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:38:32 np0005596060 nova_compute[247421]: 2026-01-26 18:38:32.216 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:38:32 np0005596060 nova_compute[247421]: 2026-01-26 18:38:32.216 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:38:32 np0005596060 lucid_jackson[295294]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:38:32 np0005596060 lucid_jackson[295294]: --> relative data size: 1.0
Jan 26 13:38:32 np0005596060 lucid_jackson[295294]: --> All data devices are unavailable
Jan 26 13:38:32 np0005596060 systemd[1]: libpod-1f99064959ee838c994151b8105025b640cdf81abb714f85ac538037fa1e39e6.scope: Deactivated successfully.
Jan 26 13:38:32 np0005596060 podman[295278]: 2026-01-26 18:38:32.371373501 +0000 UTC m=+1.014562685 container died 1f99064959ee838c994151b8105025b640cdf81abb714f85ac538037fa1e39e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jackson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:38:32 np0005596060 nova_compute[247421]: 2026-01-26 18:38:32.400 247428 DEBUG nova.policy [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ffa1cd7ba9e543f78f2ef48c2a7a67a2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '301bad5c2066428fa7f214024672bf92', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 26 13:38:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay-bebecd73d752a933f3419dab94858c840c0d448d8a35a7663a1291ee11459028-merged.mount: Deactivated successfully.
Jan 26 13:38:32 np0005596060 podman[295278]: 2026-01-26 18:38:32.425873173 +0000 UTC m=+1.069062357 container remove 1f99064959ee838c994151b8105025b640cdf81abb714f85ac538037fa1e39e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 26 13:38:32 np0005596060 systemd[1]: libpod-conmon-1f99064959ee838c994151b8105025b640cdf81abb714f85ac538037fa1e39e6.scope: Deactivated successfully.
Jan 26 13:38:32 np0005596060 nova_compute[247421]: 2026-01-26 18:38:32.741 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 69 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 170 B/s rd, 1.2 MiB/s wr, 1 op/s
Jan 26 13:38:32 np0005596060 nova_compute[247421]: 2026-01-26 18:38:32.934 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:38:33 np0005596060 podman[295630]: 2026-01-26 18:38:33.081717139 +0000 UTC m=+0.036467689 container create 1e1deaae8c61ed8674a7fa9b522af41fcf11caa0747549a4613a31e59c9ab992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_allen, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 13:38:33 np0005596060 systemd[1]: Started libpod-conmon-1e1deaae8c61ed8674a7fa9b522af41fcf11caa0747549a4613a31e59c9ab992.scope.
Jan 26 13:38:33 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:38:33 np0005596060 podman[295630]: 2026-01-26 18:38:33.15765154 +0000 UTC m=+0.112402110 container init 1e1deaae8c61ed8674a7fa9b522af41fcf11caa0747549a4613a31e59c9ab992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_allen, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:38:33 np0005596060 podman[295630]: 2026-01-26 18:38:33.066294781 +0000 UTC m=+0.021045351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:38:33 np0005596060 podman[295630]: 2026-01-26 18:38:33.166056482 +0000 UTC m=+0.120807032 container start 1e1deaae8c61ed8674a7fa9b522af41fcf11caa0747549a4613a31e59c9ab992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:38:33 np0005596060 podman[295630]: 2026-01-26 18:38:33.170143405 +0000 UTC m=+0.124893975 container attach 1e1deaae8c61ed8674a7fa9b522af41fcf11caa0747549a4613a31e59c9ab992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_allen, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:38:33 np0005596060 nice_allen[295647]: 167 167
Jan 26 13:38:33 np0005596060 systemd[1]: libpod-1e1deaae8c61ed8674a7fa9b522af41fcf11caa0747549a4613a31e59c9ab992.scope: Deactivated successfully.
Jan 26 13:38:33 np0005596060 podman[295630]: 2026-01-26 18:38:33.172072543 +0000 UTC m=+0.126823123 container died 1e1deaae8c61ed8674a7fa9b522af41fcf11caa0747549a4613a31e59c9ab992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_allen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:38:33 np0005596060 systemd[1]: var-lib-containers-storage-overlay-33154fdb0f78dc8d8d3d195ec7e6b95d7bd165a1187dc09d0a95ae8b36675e61-merged.mount: Deactivated successfully.
Jan 26 13:38:33 np0005596060 podman[295630]: 2026-01-26 18:38:33.208973592 +0000 UTC m=+0.163724132 container remove 1e1deaae8c61ed8674a7fa9b522af41fcf11caa0747549a4613a31e59c9ab992 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:38:33 np0005596060 systemd[1]: libpod-conmon-1e1deaae8c61ed8674a7fa9b522af41fcf11caa0747549a4613a31e59c9ab992.scope: Deactivated successfully.
Jan 26 13:38:33 np0005596060 podman[295671]: 2026-01-26 18:38:33.379780991 +0000 UTC m=+0.044676916 container create 181dd814fc17a5ac30fa292666f9b258a3052c02521decc7e2ea5baf9a080792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 13:38:33 np0005596060 systemd[1]: Started libpod-conmon-181dd814fc17a5ac30fa292666f9b258a3052c02521decc7e2ea5baf9a080792.scope.
Jan 26 13:38:33 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:38:33 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3e13711b4a543b139999edfda9b032bb0639cf383bb24a12760b9d843b3bd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:33 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3e13711b4a543b139999edfda9b032bb0639cf383bb24a12760b9d843b3bd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:33 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3e13711b4a543b139999edfda9b032bb0639cf383bb24a12760b9d843b3bd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:33 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d3e13711b4a543b139999edfda9b032bb0639cf383bb24a12760b9d843b3bd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:33 np0005596060 podman[295671]: 2026-01-26 18:38:33.361353957 +0000 UTC m=+0.026249902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:38:33 np0005596060 podman[295671]: 2026-01-26 18:38:33.461921608 +0000 UTC m=+0.126817563 container init 181dd814fc17a5ac30fa292666f9b258a3052c02521decc7e2ea5baf9a080792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_noyce, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 13:38:33 np0005596060 podman[295671]: 2026-01-26 18:38:33.468984246 +0000 UTC m=+0.133880171 container start 181dd814fc17a5ac30fa292666f9b258a3052c02521decc7e2ea5baf9a080792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_noyce, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:38:33 np0005596060 podman[295671]: 2026-01-26 18:38:33.472780091 +0000 UTC m=+0.137676046 container attach 181dd814fc17a5ac30fa292666f9b258a3052c02521decc7e2ea5baf9a080792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:38:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:33.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:33.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:33 np0005596060 nova_compute[247421]: 2026-01-26 18:38:33.937 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]: {
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:    "1": [
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:        {
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            "devices": [
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "/dev/loop3"
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            ],
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            "lv_name": "ceph_lv0",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            "lv_size": "7511998464",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            "name": "ceph_lv0",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            "tags": {
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "ceph.cluster_name": "ceph",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "ceph.crush_device_class": "",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "ceph.encrypted": "0",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "ceph.osd_id": "1",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "ceph.type": "block",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:                "ceph.vdo": "0"
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            },
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            "type": "block",
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:            "vg_name": "ceph_vg0"
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:        }
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]:    ]
Jan 26 13:38:34 np0005596060 trusting_noyce[295687]: }
Jan 26 13:38:34 np0005596060 systemd[1]: libpod-181dd814fc17a5ac30fa292666f9b258a3052c02521decc7e2ea5baf9a080792.scope: Deactivated successfully.
Jan 26 13:38:34 np0005596060 podman[295671]: 2026-01-26 18:38:34.341985197 +0000 UTC m=+1.006881132 container died 181dd814fc17a5ac30fa292666f9b258a3052c02521decc7e2ea5baf9a080792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 13:38:34 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0d3e13711b4a543b139999edfda9b032bb0639cf383bb24a12760b9d843b3bd0-merged.mount: Deactivated successfully.
Jan 26 13:38:34 np0005596060 podman[295671]: 2026-01-26 18:38:34.396799087 +0000 UTC m=+1.061695012 container remove 181dd814fc17a5ac30fa292666f9b258a3052c02521decc7e2ea5baf9a080792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_noyce, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 13:38:34 np0005596060 systemd[1]: libpod-conmon-181dd814fc17a5ac30fa292666f9b258a3052c02521decc7e2ea5baf9a080792.scope: Deactivated successfully.
Jan 26 13:38:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 84 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 7.0 KiB/s rd, 1.7 MiB/s wr, 15 op/s
Jan 26 13:38:34 np0005596060 podman[295850]: 2026-01-26 18:38:34.976819133 +0000 UTC m=+0.038815617 container create 430283d09914c8bc5d7f589c5aa306a14b4012fa3691b248613631c01c15f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_austin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:38:35 np0005596060 systemd[1]: Started libpod-conmon-430283d09914c8bc5d7f589c5aa306a14b4012fa3691b248613631c01c15f987.scope.
Jan 26 13:38:35 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:38:35 np0005596060 podman[295850]: 2026-01-26 18:38:35.053563225 +0000 UTC m=+0.115559729 container init 430283d09914c8bc5d7f589c5aa306a14b4012fa3691b248613631c01c15f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:38:35 np0005596060 podman[295850]: 2026-01-26 18:38:34.959787765 +0000 UTC m=+0.021784269 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:38:35 np0005596060 podman[295850]: 2026-01-26 18:38:35.062053239 +0000 UTC m=+0.124049723 container start 430283d09914c8bc5d7f589c5aa306a14b4012fa3691b248613631c01c15f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 13:38:35 np0005596060 podman[295850]: 2026-01-26 18:38:35.065486675 +0000 UTC m=+0.127483189 container attach 430283d09914c8bc5d7f589c5aa306a14b4012fa3691b248613631c01c15f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_austin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:38:35 np0005596060 wonderful_austin[295867]: 167 167
Jan 26 13:38:35 np0005596060 systemd[1]: libpod-430283d09914c8bc5d7f589c5aa306a14b4012fa3691b248613631c01c15f987.scope: Deactivated successfully.
Jan 26 13:38:35 np0005596060 podman[295850]: 2026-01-26 18:38:35.068585023 +0000 UTC m=+0.130581507 container died 430283d09914c8bc5d7f589c5aa306a14b4012fa3691b248613631c01c15f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_austin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 13:38:35 np0005596060 systemd[1]: var-lib-containers-storage-overlay-97a4ae0597317a4fd335513b78f159f6bd6d8c950d67fc38a067a2f05118c1f1-merged.mount: Deactivated successfully.
Jan 26 13:38:35 np0005596060 podman[295850]: 2026-01-26 18:38:35.105936823 +0000 UTC m=+0.167933307 container remove 430283d09914c8bc5d7f589c5aa306a14b4012fa3691b248613631c01c15f987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:38:35 np0005596060 systemd[1]: libpod-conmon-430283d09914c8bc5d7f589c5aa306a14b4012fa3691b248613631c01c15f987.scope: Deactivated successfully.
Jan 26 13:38:35 np0005596060 podman[295889]: 2026-01-26 18:38:35.267095279 +0000 UTC m=+0.041701360 container create fe7060a22afee5ddd4ef8d28bd1cfb6f92c740adca780401a3401f33991a1c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:38:35 np0005596060 systemd[1]: Started libpod-conmon-fe7060a22afee5ddd4ef8d28bd1cfb6f92c740adca780401a3401f33991a1c1f.scope.
Jan 26 13:38:35 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:38:35 np0005596060 podman[295889]: 2026-01-26 18:38:35.249835805 +0000 UTC m=+0.024441906 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:38:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a63e6665882a2700758494ec8b4463a76b403f8fd176370e1b0cc4aa753b733/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a63e6665882a2700758494ec8b4463a76b403f8fd176370e1b0cc4aa753b733/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a63e6665882a2700758494ec8b4463a76b403f8fd176370e1b0cc4aa753b733/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a63e6665882a2700758494ec8b4463a76b403f8fd176370e1b0cc4aa753b733/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:35 np0005596060 podman[295889]: 2026-01-26 18:38:35.361042534 +0000 UTC m=+0.135648645 container init fe7060a22afee5ddd4ef8d28bd1cfb6f92c740adca780401a3401f33991a1c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nobel, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:38:35 np0005596060 podman[295889]: 2026-01-26 18:38:35.37043106 +0000 UTC m=+0.145037141 container start fe7060a22afee5ddd4ef8d28bd1cfb6f92c740adca780401a3401f33991a1c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nobel, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:38:35 np0005596060 podman[295889]: 2026-01-26 18:38:35.374081582 +0000 UTC m=+0.148687693 container attach fe7060a22afee5ddd4ef8d28bd1cfb6f92c740adca780401a3401f33991a1c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:38:35 np0005596060 nova_compute[247421]: 2026-01-26 18:38:35.504 247428 DEBUG nova.network.neutron [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Successfully created port: 17131365-352f-497d-ae25-1813ee58134e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 26 13:38:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:38:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:35.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:38:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:35.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:36 np0005596060 stupefied_nobel[295906]: {
Jan 26 13:38:36 np0005596060 stupefied_nobel[295906]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:38:36 np0005596060 stupefied_nobel[295906]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:38:36 np0005596060 stupefied_nobel[295906]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:38:36 np0005596060 stupefied_nobel[295906]:        "osd_id": 1,
Jan 26 13:38:36 np0005596060 stupefied_nobel[295906]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:38:36 np0005596060 stupefied_nobel[295906]:        "type": "bluestore"
Jan 26 13:38:36 np0005596060 stupefied_nobel[295906]:    }
Jan 26 13:38:36 np0005596060 stupefied_nobel[295906]: }
Jan 26 13:38:36 np0005596060 systemd[1]: libpod-fe7060a22afee5ddd4ef8d28bd1cfb6f92c740adca780401a3401f33991a1c1f.scope: Deactivated successfully.
Jan 26 13:38:36 np0005596060 podman[295889]: 2026-01-26 18:38:36.214517294 +0000 UTC m=+0.989123375 container died fe7060a22afee5ddd4ef8d28bd1cfb6f92c740adca780401a3401f33991a1c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:38:36 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2a63e6665882a2700758494ec8b4463a76b403f8fd176370e1b0cc4aa753b733-merged.mount: Deactivated successfully.
Jan 26 13:38:36 np0005596060 podman[295889]: 2026-01-26 18:38:36.270347469 +0000 UTC m=+1.044953550 container remove fe7060a22afee5ddd4ef8d28bd1cfb6f92c740adca780401a3401f33991a1c1f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:38:36 np0005596060 systemd[1]: libpod-conmon-fe7060a22afee5ddd4ef8d28bd1cfb6f92c740adca780401a3401f33991a1c1f.scope: Deactivated successfully.
Jan 26 13:38:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:38:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:38:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:38:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:38:36 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 459a8142-3b3b-4819-8076-18409dabc60d does not exist
Jan 26 13:38:36 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 46b2bf2b-9821-4402-bcde-42f3c2f361ef does not exist
Jan 26 13:38:36 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5ad5bac3-c168-4e2a-a86f-e502b32a0148 does not exist
Jan 26 13:38:36 np0005596060 nova_compute[247421]: 2026-01-26 18:38:36.435 247428 DEBUG nova.network.neutron [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Successfully updated port: 17131365-352f-497d-ae25-1813ee58134e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 26 13:38:36 np0005596060 nova_compute[247421]: 2026-01-26 18:38:36.448 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:38:36 np0005596060 nova_compute[247421]: 2026-01-26 18:38:36.448 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquired lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:38:36 np0005596060 nova_compute[247421]: 2026-01-26 18:38:36.448 247428 DEBUG nova.network.neutron [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:38:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:38:36 np0005596060 nova_compute[247421]: 2026-01-26 18:38:36.536 247428 DEBUG nova.compute.manager [req-5f429c2b-e645-421e-bb3f-44212f98c293 req-a532379d-a664-4e20-98e7-37706af23655 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-changed-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:38:36 np0005596060 nova_compute[247421]: 2026-01-26 18:38:36.537 247428 DEBUG nova.compute.manager [req-5f429c2b-e645-421e-bb3f-44212f98c293 req-a532379d-a664-4e20-98e7-37706af23655 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Refreshing instance network info cache due to event network-changed-17131365-352f-497d-ae25-1813ee58134e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:38:36 np0005596060 nova_compute[247421]: 2026-01-26 18:38:36.537 247428 DEBUG oslo_concurrency.lockutils [req-5f429c2b-e645-421e-bb3f-44212f98c293 req-a532379d-a664-4e20-98e7-37706af23655 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:38:36 np0005596060 nova_compute[247421]: 2026-01-26 18:38:36.578 247428 DEBUG nova.network.neutron [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 26 13:38:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:38:37 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:38:37 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.465 247428 DEBUG nova.network.neutron [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Updating instance_info_cache with network_info: [{"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.495 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Releasing lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.495 247428 DEBUG nova.compute.manager [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Instance network_info: |[{"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.496 247428 DEBUG oslo_concurrency.lockutils [req-5f429c2b-e645-421e-bb3f-44212f98c293 req-a532379d-a664-4e20-98e7-37706af23655 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.496 247428 DEBUG nova.network.neutron [req-5f429c2b-e645-421e-bb3f-44212f98c293 req-a532379d-a664-4e20-98e7-37706af23655 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Refreshing network info cache for port 17131365-352f-497d-ae25-1813ee58134e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.499 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Start _get_guest_xml network_info=[{"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.502 247428 WARNING nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.508 247428 DEBUG nova.virt.libvirt.host [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.509 247428 DEBUG nova.virt.libvirt.host [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.511 247428 DEBUG nova.virt.libvirt.host [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.512 247428 DEBUG nova.virt.libvirt.host [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.513 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.513 247428 DEBUG nova.virt.hardware [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.513 247428 DEBUG nova.virt.hardware [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.514 247428 DEBUG nova.virt.hardware [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.514 247428 DEBUG nova.virt.hardware [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.514 247428 DEBUG nova.virt.hardware [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.514 247428 DEBUG nova.virt.hardware [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.514 247428 DEBUG nova.virt.hardware [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.515 247428 DEBUG nova.virt.hardware [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.515 247428 DEBUG nova.virt.hardware [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.515 247428 DEBUG nova.virt.hardware [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.515 247428 DEBUG nova.virt.hardware [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.518 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:38:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:37.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.743 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:37.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:38:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2056028315' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:38:37 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.963 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:37.999 247428 DEBUG nova.storage.rbd_utils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.005 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:38:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:38:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4234296097' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.444 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.446 247428 DEBUG nova.virt.libvirt.vif [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:38:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1438735807',display_name='tempest-TestNetworkAdvancedServerOps-server-1438735807',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1438735807',id=26,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAALh3ktKNgqM5WA8+0ghvMu82v6ROKkLY/BBhswRNrRrEsHBxjp4xByWPJgWh4j6nVL/yJ/7mkDqldjlWcflbeIcqPnHu6K9XLmvuErFpXgdr3/i5QNkrsiNow1Xs/Y3g==',key_name='tempest-TestNetworkAdvancedServerOps-1982445564',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-r97sgvh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:38:31Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=766b2be2-d46f-4f27-ad07-a91017eaddaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.447 247428 DEBUG nova.network.os_vif_util [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.448 247428 DEBUG nova.network.os_vif_util [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.449 247428 DEBUG nova.objects.instance [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'pci_devices' on Instance uuid 766b2be2-d46f-4f27-ad07-a91017eaddaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.479 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  <uuid>766b2be2-d46f-4f27-ad07-a91017eaddaf</uuid>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  <name>instance-0000001a</name>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1438735807</nova:name>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:38:37</nova:creationTime>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <nova:user uuid="ffa1cd7ba9e543f78f2ef48c2a7a67a2">tempest-TestNetworkAdvancedServerOps-1357272614-project-member</nova:user>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <nova:project uuid="301bad5c2066428fa7f214024672bf92">tempest-TestNetworkAdvancedServerOps-1357272614</nova:project>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <nova:port uuid="17131365-352f-497d-ae25-1813ee58134e">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <entry name="serial">766b2be2-d46f-4f27-ad07-a91017eaddaf</entry>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <entry name="uuid">766b2be2-d46f-4f27-ad07-a91017eaddaf</entry>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/766b2be2-d46f-4f27-ad07-a91017eaddaf_disk">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/766b2be2-d46f-4f27-ad07-a91017eaddaf_disk.config">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:38:51:0d"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <target dev="tap17131365-35"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/console.log" append="off"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:38:38 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:38:38 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:38:38 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:38:38 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.480 247428 DEBUG nova.compute.manager [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Preparing to wait for external event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.481 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.481 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.481 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.482 247428 DEBUG nova.virt.libvirt.vif [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:38:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1438735807',display_name='tempest-TestNetworkAdvancedServerOps-server-1438735807',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1438735807',id=26,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAALh3ktKNgqM5WA8+0ghvMu82v6ROKkLY/BBhswRNrRrEsHBxjp4xByWPJgWh4j6nVL/yJ/7mkDqldjlWcflbeIcqPnHu6K9XLmvuErFpXgdr3/i5QNkrsiNow1Xs/Y3g==',key_name='tempest-TestNetworkAdvancedServerOps-1982445564',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-r97sgvh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:38:31Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=766b2be2-d46f-4f27-ad07-a91017eaddaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.482 247428 DEBUG nova.network.os_vif_util [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.483 247428 DEBUG nova.network.os_vif_util [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.483 247428 DEBUG os_vif [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.483 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.484 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.484 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.487 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.487 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17131365-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.487 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap17131365-35, col_values=(('external_ids', {'iface-id': '17131365-352f-497d-ae25-1813ee58134e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:51:0d', 'vm-uuid': '766b2be2-d46f-4f27-ad07-a91017eaddaf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.511 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:38 np0005596060 NetworkManager[48900]: <info>  [1769452718.5125] manager: (tap17131365-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.516 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.521 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.523 247428 INFO os_vif [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35')#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.577 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.577 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.578 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] No VIF found with MAC fa:16:3e:38:51:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.578 247428 INFO nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Using config drive#033[00m
Jan 26 13:38:38 np0005596060 nova_compute[247421]: 2026-01-26 18:38:38.602 247428 DEBUG nova.storage.rbd_utils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:38:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.515 247428 INFO nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Creating config drive at /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/disk.config#033[00m
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.520 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpey5odsch execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.631 247428 DEBUG nova.network.neutron [req-5f429c2b-e645-421e-bb3f-44212f98c293 req-a532379d-a664-4e20-98e7-37706af23655 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Updated VIF entry in instance network info cache for port 17131365-352f-497d-ae25-1813ee58134e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.632 247428 DEBUG nova.network.neutron [req-5f429c2b-e645-421e-bb3f-44212f98c293 req-a532379d-a664-4e20-98e7-37706af23655 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Updating instance_info_cache with network_info: [{"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:38:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:39.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.652 247428 DEBUG oslo_concurrency.lockutils [req-5f429c2b-e645-421e-bb3f-44212f98c293 req-a532379d-a664-4e20-98e7-37706af23655 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.653 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpey5odsch" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.684 247428 DEBUG nova.storage.rbd_utils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.687 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/disk.config 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:38:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:39.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.838 247428 DEBUG oslo_concurrency.processutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/disk.config 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.839 247428 INFO nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Deleting local config drive /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/disk.config because it was imported into RBD.#033[00m
Jan 26 13:38:39 np0005596060 kernel: tap17131365-35: entered promiscuous mode
Jan 26 13:38:39 np0005596060 NetworkManager[48900]: <info>  [1769452719.8888] manager: (tap17131365-35): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Jan 26 13:38:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:38:39Z|00139|binding|INFO|Claiming lport 17131365-352f-497d-ae25-1813ee58134e for this chassis.
Jan 26 13:38:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:38:39Z|00140|binding|INFO|17131365-352f-497d-ae25-1813ee58134e: Claiming fa:16:3e:38:51:0d 10.100.0.11
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.890 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.895 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:39.902 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:51:0d 10.100.0.11'], port_security=['fa:16:3e:38:51:0d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '766b2be2-d46f-4f27-ad07-a91017eaddaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68e765b7-d298-406e-a6ab-78affbd0449b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '301bad5c2066428fa7f214024672bf92', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7c9f9a1a-237f-40ca-bc88-5b27d64f4698', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89adc109-8ed5-4ab7-a474-13d16cdb85c5, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=17131365-352f-497d-ae25-1813ee58134e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:38:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:39.903 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 17131365-352f-497d-ae25-1813ee58134e in datapath 68e765b7-d298-406e-a6ab-78affbd0449b bound to our chassis#033[00m
Jan 26 13:38:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:39.904 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 68e765b7-d298-406e-a6ab-78affbd0449b#033[00m
Jan 26 13:38:39 np0005596060 systemd-udevd[296127]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:38:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:39.916 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7256b530-b2fb-4a5c-a228-aca1ef246fbc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:39.917 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap68e765b7-d1 in ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:38:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:39.920 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap68e765b7-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:38:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:39.920 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a40ea9c3-3f15-4bed-9c8d-4a49538e7db2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:39 np0005596060 systemd-machined[213879]: New machine qemu-12-instance-0000001a.
Jan 26 13:38:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:39.922 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d6a3e6-c7fc-4e86-a7e7-fc508a042a51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:39 np0005596060 NetworkManager[48900]: <info>  [1769452719.9365] device (tap17131365-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:38:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:39.935 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[1b6e2a0c-0613-4568-8ab4-c4fa7a9e0f96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:39 np0005596060 NetworkManager[48900]: <info>  [1769452719.9375] device (tap17131365-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.954 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:39 np0005596060 systemd[1]: Started Virtual Machine qemu-12-instance-0000001a.
Jan 26 13:38:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:38:39Z|00141|binding|INFO|Setting lport 17131365-352f-497d-ae25-1813ee58134e ovn-installed in OVS
Jan 26 13:38:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:38:39Z|00142|binding|INFO|Setting lport 17131365-352f-497d-ae25-1813ee58134e up in Southbound
Jan 26 13:38:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:39.958 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[627fa2b2-f7b5-4d50-ab4d-14b96d8e5401]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:39 np0005596060 nova_compute[247421]: 2026-01-26 18:38:39.959 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:39.989 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[1d740733-de07-415d-9fca-d4e1fe0730ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:39 np0005596060 NetworkManager[48900]: <info>  [1769452719.9962] manager: (tap68e765b7-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Jan 26 13:38:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:39.995 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0438f315-ce31-4345-874e-fcdca016e6c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.023 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[61cb02d1-ccc0-4d27-90fe-7006ae43dbcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.026 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6c545b-4e3c-4437-b579-dd4babf76d98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:40 np0005596060 NetworkManager[48900]: <info>  [1769452720.0439] device (tap68e765b7-d0): carrier: link connected
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.048 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[2580ee32-1c52-4f77-8c5a-eb44e41ba889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.067 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b526847d-c3ca-4a13-bf14-51fd427ccfe3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap68e765b7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:e1:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 648462, 'reachable_time': 42296, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296159, 'error': None, 'target': 'ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.084 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5b060fdd-bb48-4617-83f1-cdadbffc590d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:e138'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 648462, 'tstamp': 648462}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296160, 'error': None, 'target': 'ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.102 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e4f24f-7ba3-4d96-b6af-6a019109e3b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap68e765b7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:e1:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 648462, 'reachable_time': 42296, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296161, 'error': None, 'target': 'ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.135 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[d702c36a-cd0b-46c5-9787-758a4e8ca3ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.194 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e25a1253-0322-4041-9c6a-bd19215a1627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.196 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68e765b7-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.197 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.197 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68e765b7-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.199 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:40 np0005596060 kernel: tap68e765b7-d0: entered promiscuous mode
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.201 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:40 np0005596060 NetworkManager[48900]: <info>  [1769452720.2037] manager: (tap68e765b7-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.203 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap68e765b7-d0, col_values=(('external_ids', {'iface-id': '6f449115-5c16-4848-80c5-24c990d0eb94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:38:40 np0005596060 ovn_controller[148842]: 2026-01-26T18:38:40Z|00143|binding|INFO|Releasing lport 6f449115-5c16-4848-80c5-24c990d0eb94 from this chassis (sb_readonly=0)
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.206 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.207 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/68e765b7-d298-406e-a6ab-78affbd0449b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/68e765b7-d298-406e-a6ab-78affbd0449b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.208 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1a4381-d484-4cbd-8825-0c6b851bc288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.209 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-68e765b7-d298-406e-a6ab-78affbd0449b
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/68e765b7-d298-406e-a6ab-78affbd0449b.pid.haproxy
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 68e765b7-d298-406e-a6ab-78affbd0449b
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:38:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:40.211 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b', 'env', 'PROCESS_TAG=haproxy-68e765b7-d298-406e-a6ab-78affbd0449b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/68e765b7-d298-406e-a6ab-78affbd0449b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.218 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.528 247428 DEBUG nova.compute.manager [req-2d63ad86-97e9-4576-b746-4cd536bac61f req-bdc2318b-bdf3-491e-a020-fe3878ab6bb4 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.529 247428 DEBUG oslo_concurrency.lockutils [req-2d63ad86-97e9-4576-b746-4cd536bac61f req-bdc2318b-bdf3-491e-a020-fe3878ab6bb4 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.530 247428 DEBUG oslo_concurrency.lockutils [req-2d63ad86-97e9-4576-b746-4cd536bac61f req-bdc2318b-bdf3-491e-a020-fe3878ab6bb4 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.530 247428 DEBUG oslo_concurrency.lockutils [req-2d63ad86-97e9-4576-b746-4cd536bac61f req-bdc2318b-bdf3-491e-a020-fe3878ab6bb4 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.530 247428 DEBUG nova.compute.manager [req-2d63ad86-97e9-4576-b746-4cd536bac61f req-bdc2318b-bdf3-491e-a020-fe3878ab6bb4 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Processing event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 26 13:38:40 np0005596060 podman[296193]: 2026-01-26 18:38:40.583237903 +0000 UTC m=+0.055599620 container create 04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:38:40 np0005596060 systemd[1]: Started libpod-conmon-04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd.scope.
Jan 26 13:38:40 np0005596060 podman[296193]: 2026-01-26 18:38:40.552080459 +0000 UTC m=+0.024442196 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:38:40 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:38:40 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8634e2869f823f81855ac4564fdd87729a25d2f6049553a4d02bcb42938610c0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:38:40 np0005596060 podman[296193]: 2026-01-26 18:38:40.681760553 +0000 UTC m=+0.154122310 container init 04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:38:40 np0005596060 podman[296193]: 2026-01-26 18:38:40.688162234 +0000 UTC m=+0.160523961 container start 04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 13:38:40 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296209]: [NOTICE]   (296213) : New worker (296222) forked
Jan 26 13:38:40 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296209]: [NOTICE]   (296213) : Loading success.
Jan 26 13:38:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.889 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452720.889057, 766b2be2-d46f-4f27-ad07-a91017eaddaf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.890 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] VM Started (Lifecycle Event)#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.892 247428 DEBUG nova.compute.manager [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.895 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.900 247428 INFO nova.virt.libvirt.driver [-] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Instance spawned successfully.#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.901 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.906 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.909 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.922 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.923 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.923 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.924 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.924 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.924 247428 DEBUG nova.virt.libvirt.driver [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.928 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.929 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452720.8892817, 766b2be2-d46f-4f27-ad07-a91017eaddaf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.929 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] VM Paused (Lifecycle Event)#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.966 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.969 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452720.8946788, 766b2be2-d46f-4f27-ad07-a91017eaddaf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.969 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.985 247428 INFO nova.compute.manager [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Took 9.53 seconds to spawn the instance on the hypervisor.#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.985 247428 DEBUG nova.compute.manager [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.986 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:38:40 np0005596060 nova_compute[247421]: 2026-01-26 18:38:40.992 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:38:41 np0005596060 nova_compute[247421]: 2026-01-26 18:38:41.019 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:38:41 np0005596060 nova_compute[247421]: 2026-01-26 18:38:41.048 247428 INFO nova.compute.manager [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Took 11.79 seconds to build instance.#033[00m
Jan 26 13:38:41 np0005596060 nova_compute[247421]: 2026-01-26 18:38:41.088 247428 DEBUG oslo_concurrency.lockutils [None req-86d0d5a6-8a3a-439e-93e0-d8956a47ffab ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:38:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:38:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:41.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:41.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:42 np0005596060 nova_compute[247421]: 2026-01-26 18:38:42.631 247428 DEBUG nova.compute.manager [req-56fc5a6d-0ecd-4d9a-8bf7-c2d1c3a9f1e6 req-139bca52-ba6a-4a57-963f-5faf4f456ebf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:38:42 np0005596060 nova_compute[247421]: 2026-01-26 18:38:42.632 247428 DEBUG oslo_concurrency.lockutils [req-56fc5a6d-0ecd-4d9a-8bf7-c2d1c3a9f1e6 req-139bca52-ba6a-4a57-963f-5faf4f456ebf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:38:42 np0005596060 nova_compute[247421]: 2026-01-26 18:38:42.633 247428 DEBUG oslo_concurrency.lockutils [req-56fc5a6d-0ecd-4d9a-8bf7-c2d1c3a9f1e6 req-139bca52-ba6a-4a57-963f-5faf4f456ebf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:38:42 np0005596060 nova_compute[247421]: 2026-01-26 18:38:42.633 247428 DEBUG oslo_concurrency.lockutils [req-56fc5a6d-0ecd-4d9a-8bf7-c2d1c3a9f1e6 req-139bca52-ba6a-4a57-963f-5faf4f456ebf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:38:42 np0005596060 nova_compute[247421]: 2026-01-26 18:38:42.633 247428 DEBUG nova.compute.manager [req-56fc5a6d-0ecd-4d9a-8bf7-c2d1c3a9f1e6 req-139bca52-ba6a-4a57-963f-5faf4f456ebf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] No waiting events found dispatching network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:38:42 np0005596060 nova_compute[247421]: 2026-01-26 18:38:42.633 247428 WARNING nova.compute.manager [req-56fc5a6d-0ecd-4d9a-8bf7-c2d1c3a9f1e6 req-139bca52-ba6a-4a57-963f-5faf4f456ebf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received unexpected event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e for instance with vm_state active and task_state None.#033[00m
Jan 26 13:38:42 np0005596060 nova_compute[247421]: 2026-01-26 18:38:42.744 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Jan 26 13:38:43 np0005596060 nova_compute[247421]: 2026-01-26 18:38:43.512 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:43 np0005596060 nova_compute[247421]: 2026-01-26 18:38:43.637 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:43 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:43.638 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:38:43 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:43.639 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:38:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:43.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:43.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:38:44
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'images', 'default.rgw.meta', '.rgw.root', '.mgr', 'backups']
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:38:44 np0005596060 ovn_controller[148842]: 2026-01-26T18:38:44Z|00144|binding|INFO|Releasing lport 6f449115-5c16-4848-80c5-24c990d0eb94 from this chassis (sb_readonly=0)
Jan 26 13:38:44 np0005596060 NetworkManager[48900]: <info>  [1769452724.3628] manager: (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Jan 26 13:38:44 np0005596060 NetworkManager[48900]: <info>  [1769452724.3642] manager: (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Jan 26 13:38:44 np0005596060 nova_compute[247421]: 2026-01-26 18:38:44.361 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:44 np0005596060 ovn_controller[148842]: 2026-01-26T18:38:44Z|00145|binding|INFO|Releasing lport 6f449115-5c16-4848-80c5-24c990d0eb94 from this chassis (sb_readonly=0)
Jan 26 13:38:44 np0005596060 nova_compute[247421]: 2026-01-26 18:38:44.394 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:44 np0005596060 nova_compute[247421]: 2026-01-26 18:38:44.398 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 627 KiB/s wr, 98 op/s
Jan 26 13:38:44 np0005596060 nova_compute[247421]: 2026-01-26 18:38:44.861 247428 DEBUG nova.compute.manager [req-db09be42-dbee-4087-806c-3eec63e0bac3 req-bc584099-7770-4f3d-853c-c54de1dfb551 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-changed-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:38:44 np0005596060 nova_compute[247421]: 2026-01-26 18:38:44.861 247428 DEBUG nova.compute.manager [req-db09be42-dbee-4087-806c-3eec63e0bac3 req-bc584099-7770-4f3d-853c-c54de1dfb551 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Refreshing instance network info cache due to event network-changed-17131365-352f-497d-ae25-1813ee58134e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:38:44 np0005596060 nova_compute[247421]: 2026-01-26 18:38:44.861 247428 DEBUG oslo_concurrency.lockutils [req-db09be42-dbee-4087-806c-3eec63e0bac3 req-bc584099-7770-4f3d-853c-c54de1dfb551 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:38:44 np0005596060 nova_compute[247421]: 2026-01-26 18:38:44.862 247428 DEBUG oslo_concurrency.lockutils [req-db09be42-dbee-4087-806c-3eec63e0bac3 req-bc584099-7770-4f3d-853c-c54de1dfb551 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:38:44 np0005596060 nova_compute[247421]: 2026-01-26 18:38:44.862 247428 DEBUG nova.network.neutron [req-db09be42-dbee-4087-806c-3eec63e0bac3 req-bc584099-7770-4f3d-853c-c54de1dfb551 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Refreshing network info cache for port 17131365-352f-497d-ae25-1813ee58134e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:38:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:38:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:45.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:45.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:46 np0005596060 nova_compute[247421]: 2026-01-26 18:38:46.084 247428 DEBUG nova.network.neutron [req-db09be42-dbee-4087-806c-3eec63e0bac3 req-bc584099-7770-4f3d-853c-c54de1dfb551 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Updated VIF entry in instance network info cache for port 17131365-352f-497d-ae25-1813ee58134e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:38:46 np0005596060 nova_compute[247421]: 2026-01-26 18:38:46.085 247428 DEBUG nova.network.neutron [req-db09be42-dbee-4087-806c-3eec63e0bac3 req-bc584099-7770-4f3d-853c-c54de1dfb551 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Updating instance_info_cache with network_info: [{"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:38:46 np0005596060 nova_compute[247421]: 2026-01-26 18:38:46.107 247428 DEBUG oslo_concurrency.lockutils [req-db09be42-dbee-4087-806c-3eec63e0bac3 req-bc584099-7770-4f3d-853c-c54de1dfb551 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:38:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:38:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 56 KiB/s wr, 85 op/s
Jan 26 13:38:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:47.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:47 np0005596060 nova_compute[247421]: 2026-01-26 18:38:47.805 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:47.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:48 np0005596060 nova_compute[247421]: 2026-01-26 18:38:48.514 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:48 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:38:48.641 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:38:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:38:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:49.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:49.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 88 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:38:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:38:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:51.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:51.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 95 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 892 KiB/s wr, 87 op/s
Jan 26 13:38:52 np0005596060 nova_compute[247421]: 2026-01-26 18:38:52.808 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:53 np0005596060 ovn_controller[148842]: 2026-01-26T18:38:53Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:51:0d 10.100.0.11
Jan 26 13:38:53 np0005596060 ovn_controller[148842]: 2026-01-26T18:38:53Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:51:0d 10.100.0.11
Jan 26 13:38:53 np0005596060 nova_compute[247421]: 2026-01-26 18:38:53.554 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:53.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:53.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:38:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 9822 writes, 42K keys, 9818 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 9822 writes, 9818 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1560 writes, 6977 keys, 1560 commit groups, 1.0 writes per commit group, ingest: 10.78 MB, 0.02 MB/s#012Interval WAL: 1560 writes, 1560 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     17.8      3.16              0.21        26    0.121       0      0       0.0       0.0#012  L6      1/0   10.45 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   3.9     67.5     55.9      3.92              0.69        25    0.157    140K    14K       0.0       0.0#012 Sum      1/0   10.45 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   4.9     37.4     38.9      7.08              0.90        51    0.139    140K    14K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.6    135.3    138.9      0.50              0.18        12    0.042     41K   3099       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     67.5     55.9      3.92              0.69        25    0.157    140K    14K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     17.8      3.15              0.21        25    0.126       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.055, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.27 GB write, 0.08 MB/s write, 0.26 GB read, 0.07 MB/s read, 7.1 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5652937211f0#2 capacity: 304.00 MB usage: 31.76 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000231 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1846,30.70 MB,10.0995%) FilterBlock(52,398.11 KB,0.127888%) IndexBlock(52,687.73 KB,0.220926%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 26 13:38:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 98 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 799 KiB/s rd, 1.3 MiB/s wr, 54 op/s
Jan 26 13:38:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:55.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:55 np0005596060 podman[296325]: 2026-01-26 18:38:55.795686802 +0000 UTC m=+0.055980270 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 26 13:38:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:55.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:55 np0005596060 podman[296326]: 2026-01-26 18:38:55.898226582 +0000 UTC m=+0.158467109 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, 
org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 13:38:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:38:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 115 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 243 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 26 13:38:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:57.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:57 np0005596060 nova_compute[247421]: 2026-01-26 18:38:57.810 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:38:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:57.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:38:58 np0005596060 nova_compute[247421]: 2026-01-26 18:38:58.555 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:38:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:38:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:38:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:38:59.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:38:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:38:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:38:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:38:59.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:39:00 np0005596060 nova_compute[247421]: 2026-01-26 18:39:00.450 247428 INFO nova.compute.manager [None req-dfdba35c-4fae-414a-a6f2-9038955543dd ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Get console output#033[00m
Jan 26 13:39:00 np0005596060 nova_compute[247421]: 2026-01-26 18:39:00.455 285734 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 26 13:39:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 355 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:39:01 np0005596060 nova_compute[247421]: 2026-01-26 18:39:01.456 247428 INFO nova.compute.manager [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Rebuilding instance#033[00m
Jan 26 13:39:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:39:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:01.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:01 np0005596060 nova_compute[247421]: 2026-01-26 18:39:01.673 247428 DEBUG nova.objects.instance [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 766b2be2-d46f-4f27-ad07-a91017eaddaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:39:01 np0005596060 nova_compute[247421]: 2026-01-26 18:39:01.689 247428 DEBUG nova.compute.manager [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:39:01 np0005596060 nova_compute[247421]: 2026-01-26 18:39:01.730 247428 DEBUG nova.objects.instance [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'pci_requests' on Instance uuid 766b2be2-d46f-4f27-ad07-a91017eaddaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:39:01 np0005596060 nova_compute[247421]: 2026-01-26 18:39:01.742 247428 DEBUG nova.objects.instance [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'pci_devices' on Instance uuid 766b2be2-d46f-4f27-ad07-a91017eaddaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:39:01 np0005596060 nova_compute[247421]: 2026-01-26 18:39:01.754 247428 DEBUG nova.objects.instance [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'resources' on Instance uuid 766b2be2-d46f-4f27-ad07-a91017eaddaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:39:01 np0005596060 nova_compute[247421]: 2026-01-26 18:39:01.767 247428 DEBUG nova.objects.instance [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'migration_context' on Instance uuid 766b2be2-d46f-4f27-ad07-a91017eaddaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:39:01 np0005596060 nova_compute[247421]: 2026-01-26 18:39:01.790 247428 DEBUG nova.objects.instance [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 26 13:39:01 np0005596060 nova_compute[247421]: 2026-01-26 18:39:01.794 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 26 13:39:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:01.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 357 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 26 13:39:02 np0005596060 nova_compute[247421]: 2026-01-26 18:39:02.813 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:03 np0005596060 nova_compute[247421]: 2026-01-26 18:39:03.598 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:03.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:03.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:03 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021719564256467523 of space, bias 1.0, pg target 0.6515869276940257 quantized to 32 (current 32)
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:39:04 np0005596060 kernel: tap17131365-35 (unregistering): left promiscuous mode
Jan 26 13:39:04 np0005596060 NetworkManager[48900]: <info>  [1769452744.0769] device (tap17131365-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:39:04 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:04Z|00146|binding|INFO|Releasing lport 17131365-352f-497d-ae25-1813ee58134e from this chassis (sb_readonly=0)
Jan 26 13:39:04 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:04Z|00147|binding|INFO|Setting lport 17131365-352f-497d-ae25-1813ee58134e down in Southbound
Jan 26 13:39:04 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:04Z|00148|binding|INFO|Removing iface tap17131365-35 ovn-installed in OVS
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.087 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.094 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:51:0d 10.100.0.11'], port_security=['fa:16:3e:38:51:0d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '766b2be2-d46f-4f27-ad07-a91017eaddaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68e765b7-d298-406e-a6ab-78affbd0449b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '301bad5c2066428fa7f214024672bf92', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c9f9a1a-237f-40ca-bc88-5b27d64f4698', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89adc109-8ed5-4ab7-a474-13d16cdb85c5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=17131365-352f-497d-ae25-1813ee58134e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.095 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 17131365-352f-497d-ae25-1813ee58134e in datapath 68e765b7-d298-406e-a6ab-78affbd0449b unbound from our chassis#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.096 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 68e765b7-d298-406e-a6ab-78affbd0449b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.099 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[8f261ac3-9084-43cd-bc59-9fc3d960f6fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.099 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b namespace which is not needed anymore#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.107 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:04 np0005596060 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Jan 26 13:39:04 np0005596060 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001a.scope: Consumed 13.921s CPU time.
Jan 26 13:39:04 np0005596060 systemd-machined[213879]: Machine qemu-12-instance-0000001a terminated.
Jan 26 13:39:04 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296209]: [NOTICE]   (296213) : haproxy version is 2.8.14-c23fe91
Jan 26 13:39:04 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296209]: [NOTICE]   (296213) : path to executable is /usr/sbin/haproxy
Jan 26 13:39:04 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296209]: [WARNING]  (296213) : Exiting Master process...
Jan 26 13:39:04 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296209]: [WARNING]  (296213) : Exiting Master process...
Jan 26 13:39:04 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296209]: [ALERT]    (296213) : Current worker (296222) exited with code 143 (Terminated)
Jan 26 13:39:04 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296209]: [WARNING]  (296213) : All workers exited. Exiting... (0)
Jan 26 13:39:04 np0005596060 systemd[1]: libpod-04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd.scope: Deactivated successfully.
Jan 26 13:39:04 np0005596060 podman[296446]: 2026-01-26 18:39:04.247119621 +0000 UTC m=+0.047148878 container died 04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:39:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd-userdata-shm.mount: Deactivated successfully.
Jan 26 13:39:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8634e2869f823f81855ac4564fdd87729a25d2f6049553a4d02bcb42938610c0-merged.mount: Deactivated successfully.
Jan 26 13:39:04 np0005596060 podman[296446]: 2026-01-26 18:39:04.285732463 +0000 UTC m=+0.085761740 container cleanup 04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 13:39:04 np0005596060 systemd[1]: libpod-conmon-04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd.scope: Deactivated successfully.
Jan 26 13:39:04 np0005596060 kernel: tap17131365-35: entered promiscuous mode
Jan 26 13:39:04 np0005596060 kernel: tap17131365-35 (unregistering): left promiscuous mode
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.314 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:04 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:04Z|00149|binding|INFO|Claiming lport 17131365-352f-497d-ae25-1813ee58134e for this chassis.
Jan 26 13:39:04 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:04Z|00150|binding|INFO|17131365-352f-497d-ae25-1813ee58134e: Claiming fa:16:3e:38:51:0d 10.100.0.11
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.321 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:51:0d 10.100.0.11'], port_security=['fa:16:3e:38:51:0d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '766b2be2-d46f-4f27-ad07-a91017eaddaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68e765b7-d298-406e-a6ab-78affbd0449b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '301bad5c2066428fa7f214024672bf92', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c9f9a1a-237f-40ca-bc88-5b27d64f4698', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89adc109-8ed5-4ab7-a474-13d16cdb85c5, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=17131365-352f-497d-ae25-1813ee58134e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:39:04 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:04Z|00151|binding|INFO|Setting lport 17131365-352f-497d-ae25-1813ee58134e ovn-installed in OVS
Jan 26 13:39:04 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:04Z|00152|binding|INFO|Setting lport 17131365-352f-497d-ae25-1813ee58134e up in Southbound
Jan 26 13:39:04 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:04Z|00153|binding|INFO|Releasing lport 17131365-352f-497d-ae25-1813ee58134e from this chassis (sb_readonly=1)
Jan 26 13:39:04 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:04Z|00154|if_status|INFO|Not setting lport 17131365-352f-497d-ae25-1813ee58134e down as sb is readonly
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.336 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:04 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:04Z|00155|binding|INFO|Removing iface tap17131365-35 ovn-installed in OVS
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.338 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:04 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:04Z|00156|binding|INFO|Releasing lport 17131365-352f-497d-ae25-1813ee58134e from this chassis (sb_readonly=0)
Jan 26 13:39:04 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:04Z|00157|binding|INFO|Setting lport 17131365-352f-497d-ae25-1813ee58134e down in Southbound
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.346 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:51:0d 10.100.0.11'], port_security=['fa:16:3e:38:51:0d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '766b2be2-d46f-4f27-ad07-a91017eaddaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68e765b7-d298-406e-a6ab-78affbd0449b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '301bad5c2066428fa7f214024672bf92', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7c9f9a1a-237f-40ca-bc88-5b27d64f4698', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89adc109-8ed5-4ab7-a474-13d16cdb85c5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=17131365-352f-497d-ae25-1813ee58134e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.350 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:04 np0005596060 podman[296480]: 2026-01-26 18:39:04.355582711 +0000 UTC m=+0.048149703 container remove 04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.361 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[97f587bb-3054-45a6-879a-a0c5b4281ee5]: (4, ('Mon Jan 26 06:39:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b (04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd)\n04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd\nMon Jan 26 06:39:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b (04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd)\n04f7ba3744e78f17e1604b4f26a59002ccf34456295797f95d6c6156b5c3d0fd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.363 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab2ea5e-6535-4372-a6ea-72d24313653f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.364 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68e765b7-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.366 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:04 np0005596060 kernel: tap68e765b7-d0: left promiscuous mode
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.380 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.382 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[3a132294-d265-4a4e-a5b6-65d53cddf1d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.396 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b8cb05ef-f8e8-4f4b-9fa1-34780097baff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.398 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[90b910de-8859-4e0a-a0c2-248edd02b258]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.415 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[342ac76e-1b43-4193-99ae-e4f7278cee48]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 648456, 'reachable_time': 18193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296509, 'error': None, 'target': 'ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:04 np0005596060 systemd[1]: run-netns-ovnmeta\x2d68e765b7\x2dd298\x2d406e\x2da6ab\x2d78affbd0449b.mount: Deactivated successfully.
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.419 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.419 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[88b46e42-c505-4acb-8c44-27dee3e4af50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.420 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 17131365-352f-497d-ae25-1813ee58134e in datapath 68e765b7-d298-406e-a6ab-78affbd0449b unbound from our chassis#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.421 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 68e765b7-d298-406e-a6ab-78affbd0449b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.422 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7e4096a2-1cc1-4b3a-be2d-bf3e2146ffda]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.422 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 17131365-352f-497d-ae25-1813ee58134e in datapath 68e765b7-d298-406e-a6ab-78affbd0449b unbound from our chassis#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.423 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 68e765b7-d298-406e-a6ab-78affbd0449b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:39:04 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:04.424 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2a0c8e-317a-4c57-970e-058312411d26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 235 KiB/s rd, 1.3 MiB/s wr, 54 op/s
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.810 247428 INFO nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Instance shutdown successfully after 3 seconds.#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.814 247428 INFO nova.virt.libvirt.driver [-] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Instance destroyed successfully.#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.819 247428 INFO nova.virt.libvirt.driver [-] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Instance destroyed successfully.#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.820 247428 DEBUG nova.virt.libvirt.vif [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:38:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1438735807',display_name='tempest-TestNetworkAdvancedServerOps-server-1438735807',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1438735807',id=26,image_ref='be7b1750-5d13-441e-bf97-67d885906c42',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAALh3ktKNgqM5WA8+0ghvMu82v6ROKkLY/BBhswRNrRrEsHBxjp4xByWPJgWh4j6nVL/yJ/7mkDqldjlWcflbeIcqPnHu6K9XLmvuErFpXgdr3/i5QNkrsiNow1Xs/Y3g==',key_name='tempest-TestNetworkAdvancedServerOps-1982445564',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-r97sgvh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='be7b1750-5d13-441e-bf97-67d885906c42',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:39:00Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=766b2be2-d46f-4f27-ad07-a91017eaddaf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.820 247428 DEBUG nova.network.os_vif_util [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.821 247428 DEBUG nova.network.os_vif_util [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.821 247428 DEBUG os_vif [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.823 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.823 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17131365-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.824 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.826 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.826 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:04 np0005596060 nova_compute[247421]: 2026-01-26 18:39:04.828 247428 INFO os_vif [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35')#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.282 247428 INFO nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Deleting instance files /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf_del#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.282 247428 INFO nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Deletion of /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf_del complete#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.441 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.442 247428 INFO nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Creating image(s)#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.469 247428 DEBUG nova.storage.rbd_utils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.499 247428 DEBUG nova.storage.rbd_utils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.527 247428 DEBUG nova.storage.rbd_utils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.531 247428 DEBUG oslo_concurrency.lockutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "845aad0744c07ae3a06850747475706fc56a381e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.532 247428 DEBUG oslo_concurrency.lockutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "845aad0744c07ae3a06850747475706fc56a381e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:05.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.729 247428 DEBUG nova.virt.libvirt.imagebackend [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Image locations are: [{'url': 'rbd://d4cd1917-5876-51b6-bc64-65a16199754d/images/be7b1750-5d13-441e-bf97-67d885906c42/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d4cd1917-5876-51b6-bc64-65a16199754d/images/be7b1750-5d13-441e-bf97-67d885906c42/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 26 13:39:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:05.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.970 247428 DEBUG nova.compute.manager [req-d9a0bed5-6ebd-4ef3-9f0c-cb87203138dc req-b1415728-d453-4961-83b8-059eb07abaf4 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-unplugged-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.971 247428 DEBUG oslo_concurrency.lockutils [req-d9a0bed5-6ebd-4ef3-9f0c-cb87203138dc req-b1415728-d453-4961-83b8-059eb07abaf4 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.971 247428 DEBUG oslo_concurrency.lockutils [req-d9a0bed5-6ebd-4ef3-9f0c-cb87203138dc req-b1415728-d453-4961-83b8-059eb07abaf4 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.971 247428 DEBUG oslo_concurrency.lockutils [req-d9a0bed5-6ebd-4ef3-9f0c-cb87203138dc req-b1415728-d453-4961-83b8-059eb07abaf4 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.971 247428 DEBUG nova.compute.manager [req-d9a0bed5-6ebd-4ef3-9f0c-cb87203138dc req-b1415728-d453-4961-83b8-059eb07abaf4 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] No waiting events found dispatching network-vif-unplugged-17131365-352f-497d-ae25-1813ee58134e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:39:05 np0005596060 nova_compute[247421]: 2026-01-26 18:39:05.972 247428 WARNING nova.compute.manager [req-d9a0bed5-6ebd-4ef3-9f0c-cb87203138dc req-b1415728-d453-4961-83b8-059eb07abaf4 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received unexpected event network-vif-unplugged-17131365-352f-497d-ae25-1813ee58134e for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 26 13:39:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:39:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 99 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 186 KiB/s rd, 913 KiB/s wr, 57 op/s
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.035 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.130 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e.part --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.132 247428 DEBUG nova.virt.images [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] be7b1750-5d13-441e-bf97-67d885906c42 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.132 247428 DEBUG nova.privsep.utils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.133 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e.part /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.329 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e.part /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e.converted" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.334 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.411 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e.converted --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.412 247428 DEBUG oslo_concurrency.lockutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "845aad0744c07ae3a06850747475706fc56a381e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.881s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.436 247428 DEBUG nova.storage.rbd_utils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.439 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:39:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:07.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.775 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.841 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.847 247428 DEBUG nova.storage.rbd_utils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] resizing rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 26 13:39:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:07.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.958 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.959 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Ensure instance console log exists: /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.960 247428 DEBUG oslo_concurrency.lockutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.960 247428 DEBUG oslo_concurrency.lockutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.961 247428 DEBUG oslo_concurrency.lockutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.963 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Start _get_guest_xml network_info=[{"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:29Z,direct_url=<?>,disk_format='qcow2',id=be7b1750-5d13-441e-bf97-67d885906c42,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.967 247428 WARNING nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.973 247428 DEBUG nova.virt.libvirt.host [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.974 247428 DEBUG nova.virt.libvirt.host [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.976 247428 DEBUG nova.virt.libvirt.host [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.977 247428 DEBUG nova.virt.libvirt.host [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.978 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.978 247428 DEBUG nova.virt.hardware [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:29Z,direct_url=<?>,disk_format='qcow2',id=be7b1750-5d13-441e-bf97-67d885906c42,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.979 247428 DEBUG nova.virt.hardware [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.979 247428 DEBUG nova.virt.hardware [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.979 247428 DEBUG nova.virt.hardware [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.979 247428 DEBUG nova.virt.hardware [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.980 247428 DEBUG nova.virt.hardware [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.980 247428 DEBUG nova.virt.hardware [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.980 247428 DEBUG nova.virt.hardware [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.981 247428 DEBUG nova.virt.hardware [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.981 247428 DEBUG nova.virt.hardware [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.981 247428 DEBUG nova.virt.hardware [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 26 13:39:07 np0005596060 nova_compute[247421]: 2026-01-26 18:39:07.981 247428 DEBUG nova.objects.instance [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 766b2be2-d46f-4f27-ad07-a91017eaddaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.002 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.074 247428 DEBUG nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.075 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.076 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.076 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.076 247428 DEBUG nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] No waiting events found dispatching network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.076 247428 WARNING nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received unexpected event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.077 247428 DEBUG nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.077 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.077 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.077 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.078 247428 DEBUG nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] No waiting events found dispatching network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.078 247428 WARNING nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received unexpected event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.078 247428 DEBUG nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.078 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.079 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.079 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.079 247428 DEBUG nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] No waiting events found dispatching network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.079 247428 WARNING nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received unexpected event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.080 247428 DEBUG nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-unplugged-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.080 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.080 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.080 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.081 247428 DEBUG nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] No waiting events found dispatching network-vif-unplugged-17131365-352f-497d-ae25-1813ee58134e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.081 247428 WARNING nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received unexpected event network-vif-unplugged-17131365-352f-497d-ae25-1813ee58134e for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.081 247428 DEBUG nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.081 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.082 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.082 247428 DEBUG oslo_concurrency.lockutils [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.082 247428 DEBUG nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] No waiting events found dispatching network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.082 247428 WARNING nova.compute.manager [req-fc398033-3753-45ab-84c0-9048b29b5fdd req-a0f29e24-4571-42ce-8d19-6529e66fbf87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received unexpected event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 26 13:39:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:39:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1465853042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.424 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.449 247428 DEBUG nova.storage.rbd_utils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.453 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:39:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 73 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 63 op/s
Jan 26 13:39:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:39:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3930082911' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.899 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.901 247428 DEBUG nova.virt.libvirt.vif [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T18:38:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1438735807',display_name='tempest-TestNetworkAdvancedServerOps-server-1438735807',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1438735807',id=26,image_ref='be7b1750-5d13-441e-bf97-67d885906c42',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAALh3ktKNgqM5WA8+0ghvMu82v6ROKkLY/BBhswRNrRrEsHBxjp4xByWPJgWh4j6nVL/yJ/7mkDqldjlWcflbeIcqPnHu6K9XLmvuErFpXgdr3/i5QNkrsiNow1Xs/Y3g==',key_name='tempest-TestNetworkAdvancedServerOps-1982445564',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-r97sgvh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='be7b1750-5d13-441e-bf97-67d885906c42',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:39:05Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=766b2be2-d46f-4f27-ad07-a91017eaddaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.901 247428 DEBUG nova.network.os_vif_util [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.902 247428 DEBUG nova.network.os_vif_util [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.904 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  <uuid>766b2be2-d46f-4f27-ad07-a91017eaddaf</uuid>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  <name>instance-0000001a</name>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1438735807</nova:name>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:39:07</nova:creationTime>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <nova:user uuid="ffa1cd7ba9e543f78f2ef48c2a7a67a2">tempest-TestNetworkAdvancedServerOps-1357272614-project-member</nova:user>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <nova:project uuid="301bad5c2066428fa7f214024672bf92">tempest-TestNetworkAdvancedServerOps-1357272614</nova:project>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="be7b1750-5d13-441e-bf97-67d885906c42"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <nova:port uuid="17131365-352f-497d-ae25-1813ee58134e">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <entry name="serial">766b2be2-d46f-4f27-ad07-a91017eaddaf</entry>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <entry name="uuid">766b2be2-d46f-4f27-ad07-a91017eaddaf</entry>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/766b2be2-d46f-4f27-ad07-a91017eaddaf_disk">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/766b2be2-d46f-4f27-ad07-a91017eaddaf_disk.config">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:38:51:0d"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <target dev="tap17131365-35"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/console.log" append="off"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:39:08 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:39:08 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:39:08 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:39:08 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.905 247428 DEBUG nova.virt.libvirt.vif [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T18:38:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1438735807',display_name='tempest-TestNetworkAdvancedServerOps-server-1438735807',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1438735807',id=26,image_ref='be7b1750-5d13-441e-bf97-67d885906c42',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAALh3ktKNgqM5WA8+0ghvMu82v6ROKkLY/BBhswRNrRrEsHBxjp4xByWPJgWh4j6nVL/yJ/7mkDqldjlWcflbeIcqPnHu6K9XLmvuErFpXgdr3/i5QNkrsiNow1Xs/Y3g==',key_name='tempest-TestNetworkAdvancedServerOps-1982445564',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:38:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-r97sgvh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='be7b1750-5d13-441e-bf97-67d885906c42',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:39:05Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=766b2be2-d46f-4f27-ad07-a91017eaddaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.905 247428 DEBUG nova.network.os_vif_util [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.906 247428 DEBUG nova.network.os_vif_util [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.906 247428 DEBUG os_vif [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.907 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.907 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.908 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.910 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.910 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap17131365-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.911 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap17131365-35, col_values=(('external_ids', {'iface-id': '17131365-352f-497d-ae25-1813ee58134e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:38:51:0d', 'vm-uuid': '766b2be2-d46f-4f27-ad07-a91017eaddaf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:39:08 np0005596060 NetworkManager[48900]: <info>  [1769452748.9136] manager: (tap17131365-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.915 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.918 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.920 247428 INFO os_vif [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35')#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.992 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.993 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.993 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] No VIF found with MAC fa:16:3e:38:51:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:39:08 np0005596060 nova_compute[247421]: 2026-01-26 18:39:08.993 247428 INFO nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Using config drive#033[00m
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.021 247428 DEBUG nova.storage.rbd_utils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.043 247428 DEBUG nova.objects.instance [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 766b2be2-d46f-4f27-ad07-a91017eaddaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.071 247428 DEBUG nova.objects.instance [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'keypairs' on Instance uuid 766b2be2-d46f-4f27-ad07-a91017eaddaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.481 247428 INFO nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Creating config drive at /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/disk.config#033[00m
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.487 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgfy2qtul execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.638 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgfy2qtul" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:39:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:39:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:09.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.671 247428 DEBUG nova.storage.rbd_utils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.676 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/disk.config 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.848 247428 DEBUG oslo_concurrency.processutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/disk.config 766b2be2-d46f-4f27-ad07-a91017eaddaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.849 247428 INFO nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Deleting local config drive /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf/disk.config because it was imported into RBD.#033[00m
Jan 26 13:39:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:09.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:09 np0005596060 kernel: tap17131365-35: entered promiscuous mode
Jan 26 13:39:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:09Z|00158|binding|INFO|Claiming lport 17131365-352f-497d-ae25-1813ee58134e for this chassis.
Jan 26 13:39:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:09Z|00159|binding|INFO|17131365-352f-497d-ae25-1813ee58134e: Claiming fa:16:3e:38:51:0d 10.100.0.11
Jan 26 13:39:09 np0005596060 NetworkManager[48900]: <info>  [1769452749.8950] manager: (tap17131365-35): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.894 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:09.902 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:51:0d 10.100.0.11'], port_security=['fa:16:3e:38:51:0d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '766b2be2-d46f-4f27-ad07-a91017eaddaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68e765b7-d298-406e-a6ab-78affbd0449b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '301bad5c2066428fa7f214024672bf92', 'neutron:revision_number': '7', 'neutron:security_group_ids': '7c9f9a1a-237f-40ca-bc88-5b27d64f4698', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89adc109-8ed5-4ab7-a474-13d16cdb85c5, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=17131365-352f-497d-ae25-1813ee58134e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:39:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:09.904 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 17131365-352f-497d-ae25-1813ee58134e in datapath 68e765b7-d298-406e-a6ab-78affbd0449b bound to our chassis#033[00m
Jan 26 13:39:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:09.905 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 68e765b7-d298-406e-a6ab-78affbd0449b#033[00m
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.910 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:09Z|00160|binding|INFO|Setting lport 17131365-352f-497d-ae25-1813ee58134e ovn-installed in OVS
Jan 26 13:39:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:09Z|00161|binding|INFO|Setting lport 17131365-352f-497d-ae25-1813ee58134e up in Southbound
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.913 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:09 np0005596060 nova_compute[247421]: 2026-01-26 18:39:09.914 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:09.918 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[bb387b96-eaf3-431e-849f-1435c1ae541e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:09.920 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap68e765b7-d1 in ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:39:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:09.922 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap68e765b7-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:39:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:09.923 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[74eab4f3-b78a-4381-b00f-b669bccff52c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:09.923 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[bedcaa13-0abe-4c7d-9897-2d8c90350daf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:09 np0005596060 systemd-udevd[296846]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:39:09 np0005596060 systemd-machined[213879]: New machine qemu-13-instance-0000001a.
Jan 26 13:39:09 np0005596060 NetworkManager[48900]: <info>  [1769452749.9387] device (tap17131365-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:39:09 np0005596060 NetworkManager[48900]: <info>  [1769452749.9395] device (tap17131365-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:39:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:09.940 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[29272f0e-1e7f-4e96-9f09-162c8ebf7c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:09 np0005596060 systemd[1]: Started Virtual Machine qemu-13-instance-0000001a.
Jan 26 13:39:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:09.956 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7adaa32e-6490-4ebf-af43-19e139086321]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:09.987 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[ae9c49aa-d0a3-4250-8014-ba1eb0157e68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:09 np0005596060 systemd-udevd[296849]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:39:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:09.992 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a82013-c667-4c9b-a922-02832e06639a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:09 np0005596060 NetworkManager[48900]: <info>  [1769452749.9942] manager: (tap68e765b7-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.024 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[2575cda2-d24d-43d0-b873-2897eecc51b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.027 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[110e2eea-0cf1-41d4-8c39-a1953d8514b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:10 np0005596060 NetworkManager[48900]: <info>  [1769452750.0547] device (tap68e765b7-d0): carrier: link connected
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.062 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2b2c3e-7575-400e-b955-971d5b9d37f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.080 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[bfb71f05-97b8-4f20-b9ff-2a7f24ef2451]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap68e765b7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:e1:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651463, 'reachable_time': 34471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296878, 'error': None, 'target': 'ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.097 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[24ad51c2-dbed-4bb7-bc10-b9d0ea77b962]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:e138'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651463, 'tstamp': 651463}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296879, 'error': None, 'target': 'ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.116 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[21387788-255e-480e-a54c-f36ff96b9e14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap68e765b7-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:e1:38'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651463, 'reachable_time': 34471, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296880, 'error': None, 'target': 'ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.146 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f827a410-bd4c-4582-9939-a26fc5e115ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.184 247428 DEBUG nova.compute.manager [req-63845cd5-c7a1-4459-9ca1-7ecd51ab064d req-8c15d4cd-c389-4010-af8c-bcb24161bf85 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.184 247428 DEBUG oslo_concurrency.lockutils [req-63845cd5-c7a1-4459-9ca1-7ecd51ab064d req-8c15d4cd-c389-4010-af8c-bcb24161bf85 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.185 247428 DEBUG oslo_concurrency.lockutils [req-63845cd5-c7a1-4459-9ca1-7ecd51ab064d req-8c15d4cd-c389-4010-af8c-bcb24161bf85 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.185 247428 DEBUG oslo_concurrency.lockutils [req-63845cd5-c7a1-4459-9ca1-7ecd51ab064d req-8c15d4cd-c389-4010-af8c-bcb24161bf85 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.185 247428 DEBUG nova.compute.manager [req-63845cd5-c7a1-4459-9ca1-7ecd51ab064d req-8c15d4cd-c389-4010-af8c-bcb24161bf85 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] No waiting events found dispatching network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.185 247428 WARNING nova.compute.manager [req-63845cd5-c7a1-4459-9ca1-7ecd51ab064d req-8c15d4cd-c389-4010-af8c-bcb24161bf85 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received unexpected event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.221 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5603a15c-abb7-4378-8862-ef612ea9fbc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.223 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68e765b7-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.223 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.223 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68e765b7-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.225 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:10 np0005596060 NetworkManager[48900]: <info>  [1769452750.2264] manager: (tap68e765b7-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Jan 26 13:39:10 np0005596060 kernel: tap68e765b7-d0: entered promiscuous mode
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.233 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.235 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap68e765b7-d0, col_values=(('external_ids', {'iface-id': '6f449115-5c16-4848-80c5-24c990d0eb94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.236 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:10 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:10Z|00162|binding|INFO|Releasing lport 6f449115-5c16-4848-80c5-24c990d0eb94 from this chassis (sb_readonly=0)
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.237 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.238 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/68e765b7-d298-406e-a6ab-78affbd0449b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/68e765b7-d298-406e-a6ab-78affbd0449b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.239 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[936f0e84-98ba-45fa-b2e1-2d8efe4d9adc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.239 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-68e765b7-d298-406e-a6ab-78affbd0449b
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/68e765b7-d298-406e-a6ab-78affbd0449b.pid.haproxy
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 68e765b7-d298-406e-a6ab-78affbd0449b
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:39:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:10.240 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b', 'env', 'PROCESS_TAG=haproxy-68e765b7-d298-406e-a6ab-78affbd0449b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/68e765b7-d298-406e-a6ab-78affbd0449b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.249 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:10 np0005596060 podman[296928]: 2026-01-26 18:39:10.63747992 +0000 UTC m=+0.046778628 container create 7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 13:39:10 np0005596060 systemd[1]: Started libpod-conmon-7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a.scope.
Jan 26 13:39:10 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:39:10 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/990893bf38d45517bc80c22508b10c9449e73d6f9e09a914efde4c5d7693874b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:10 np0005596060 podman[296928]: 2026-01-26 18:39:10.610963233 +0000 UTC m=+0.020261971 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:39:10 np0005596060 podman[296928]: 2026-01-26 18:39:10.708273762 +0000 UTC m=+0.117572480 container init 7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:39:10 np0005596060 podman[296928]: 2026-01-26 18:39:10.714292253 +0000 UTC m=+0.123590961 container start 7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:39:10 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296967]: [NOTICE]   (296972) : New worker (296975) forked
Jan 26 13:39:10 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296967]: [NOTICE]   (296972) : Loading success.
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.753 247428 DEBUG nova.virt.libvirt.host [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Removed pending event for 766b2be2-d46f-4f27-ad07-a91017eaddaf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.754 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452750.7530303, 766b2be2-d46f-4f27-ad07-a91017eaddaf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.755 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.757 247428 DEBUG nova.compute.manager [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.758 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.760 247428 INFO nova.virt.libvirt.driver [-] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Instance spawned successfully.#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.761 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 26 13:39:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 73 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.2 MiB/s wr, 42 op/s
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.845 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.848 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.865 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.865 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.866 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.866 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.866 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.867 247428 DEBUG nova.virt.libvirt.driver [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.922 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.922 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452750.754669, 766b2be2-d46f-4f27-ad07-a91017eaddaf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.923 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] VM Started (Lifecycle Event)#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.973 247428 DEBUG nova.compute.manager [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.981 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:39:10 np0005596060 nova_compute[247421]: 2026-01-26 18:39:10.984 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:39:11 np0005596060 nova_compute[247421]: 2026-01-26 18:39:11.012 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 26 13:39:11 np0005596060 nova_compute[247421]: 2026-01-26 18:39:11.053 247428 DEBUG oslo_concurrency.lockutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:11 np0005596060 nova_compute[247421]: 2026-01-26 18:39:11.053 247428 DEBUG oslo_concurrency.lockutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:11 np0005596060 nova_compute[247421]: 2026-01-26 18:39:11.053 247428 DEBUG nova.objects.instance [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 26 13:39:11 np0005596060 nova_compute[247421]: 2026-01-26 18:39:11.199 247428 DEBUG oslo_concurrency.lockutils [None req-a45c472a-9ff4-46b7-88b6-86310ddafe8c ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:39:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:11.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:11.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:12 np0005596060 nova_compute[247421]: 2026-01-26 18:39:12.296 247428 DEBUG nova.compute.manager [req-2d84da24-6e3a-44f9-806b-910454b7628c req-f02c5027-21c5-4c54-918d-a92d1889c7cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:39:12 np0005596060 nova_compute[247421]: 2026-01-26 18:39:12.296 247428 DEBUG oslo_concurrency.lockutils [req-2d84da24-6e3a-44f9-806b-910454b7628c req-f02c5027-21c5-4c54-918d-a92d1889c7cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:12 np0005596060 nova_compute[247421]: 2026-01-26 18:39:12.296 247428 DEBUG oslo_concurrency.lockutils [req-2d84da24-6e3a-44f9-806b-910454b7628c req-f02c5027-21c5-4c54-918d-a92d1889c7cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:12 np0005596060 nova_compute[247421]: 2026-01-26 18:39:12.296 247428 DEBUG oslo_concurrency.lockutils [req-2d84da24-6e3a-44f9-806b-910454b7628c req-f02c5027-21c5-4c54-918d-a92d1889c7cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:12 np0005596060 nova_compute[247421]: 2026-01-26 18:39:12.296 247428 DEBUG nova.compute.manager [req-2d84da24-6e3a-44f9-806b-910454b7628c req-f02c5027-21c5-4c54-918d-a92d1889c7cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] No waiting events found dispatching network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:39:12 np0005596060 nova_compute[247421]: 2026-01-26 18:39:12.297 247428 WARNING nova.compute.manager [req-2d84da24-6e3a-44f9-806b-910454b7628c req-f02c5027-21c5-4c54-918d-a92d1889c7cc 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received unexpected event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e for instance with vm_state active and task_state None.#033[00m
Jan 26 13:39:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 88 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 26 13:39:12 np0005596060 nova_compute[247421]: 2026-01-26 18:39:12.817 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:13.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:13.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:13 np0005596060 nova_compute[247421]: 2026-01-26 18:39:13.914 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:39:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:39:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:39:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:39:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:39:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:39:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:14.765 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:14.766 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:14.766 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 88 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Jan 26 13:39:15 np0005596060 nova_compute[247421]: 2026-01-26 18:39:15.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:39:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:15.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:15.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:39:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 88 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 135 op/s
Jan 26 13:39:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:17.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:17 np0005596060 nova_compute[247421]: 2026-01-26 18:39:17.822 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:39:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:17.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:39:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 88 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Jan 26 13:39:18 np0005596060 nova_compute[247421]: 2026-01-26 18:39:18.915 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:19.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:19.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 88 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 599 KiB/s wr, 97 op/s
Jan 26 13:39:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:39:21 np0005596060 nova_compute[247421]: 2026-01-26 18:39:21.647 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:39:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:21.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:21.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:22 np0005596060 nova_compute[247421]: 2026-01-26 18:39:22.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:39:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 93 MiB data, 388 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 108 op/s
Jan 26 13:39:22 np0005596060 nova_compute[247421]: 2026-01-26 18:39:22.821 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:23 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:23Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:38:51:0d 10.100.0.11
Jan 26 13:39:23 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:23Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:38:51:0d 10.100.0.11
Jan 26 13:39:23 np0005596060 nova_compute[247421]: 2026-01-26 18:39:23.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:39:23 np0005596060 nova_compute[247421]: 2026-01-26 18:39:23.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:39:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:23.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:39:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:23.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:39:23 np0005596060 nova_compute[247421]: 2026-01-26 18:39:23.917 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 99 MiB data, 392 MiB used, 21 GiB / 21 GiB avail; 988 KiB/s rd, 1.2 MiB/s wr, 55 op/s
Jan 26 13:39:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:25.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:25.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:39:26 np0005596060 nova_compute[247421]: 2026-01-26 18:39:26.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:39:26 np0005596060 podman[297043]: 2026-01-26 18:39:26.786024778 +0000 UTC m=+0.051677281 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 13:39:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 114 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 550 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Jan 26 13:39:26 np0005596060 podman[297044]: 2026-01-26 18:39:26.840427468 +0000 UTC m=+0.105231810 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 26 13:39:27 np0005596060 nova_compute[247421]: 2026-01-26 18:39:27.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:39:27 np0005596060 nova_compute[247421]: 2026-01-26 18:39:27.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:39:27 np0005596060 nova_compute[247421]: 2026-01-26 18:39:27.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:39:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:39:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:27.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:39:27 np0005596060 nova_compute[247421]: 2026-01-26 18:39:27.824 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:27.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:28 np0005596060 nova_compute[247421]: 2026-01-26 18:39:28.360 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:39:28 np0005596060 nova_compute[247421]: 2026-01-26 18:39:28.360 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquired lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:39:28 np0005596060 nova_compute[247421]: 2026-01-26 18:39:28.360 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 26 13:39:28 np0005596060 nova_compute[247421]: 2026-01-26 18:39:28.361 247428 DEBUG nova.objects.instance [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 766b2be2-d46f-4f27-ad07-a91017eaddaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:39:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 13:39:28 np0005596060 nova_compute[247421]: 2026-01-26 18:39:28.920 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:29.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:39:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:29.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:39:30 np0005596060 nova_compute[247421]: 2026-01-26 18:39:30.789 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Updating instance_info_cache with network_info: [{"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:39:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 13:39:30 np0005596060 nova_compute[247421]: 2026-01-26 18:39:30.861 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Releasing lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:39:30 np0005596060 nova_compute[247421]: 2026-01-26 18:39:30.862 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 26 13:39:30 np0005596060 nova_compute[247421]: 2026-01-26 18:39:30.862 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:39:30 np0005596060 nova_compute[247421]: 2026-01-26 18:39:30.862 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:39:30 np0005596060 nova_compute[247421]: 2026-01-26 18:39:30.862 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:39:30 np0005596060 nova_compute[247421]: 2026-01-26 18:39:30.886 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:30 np0005596060 nova_compute[247421]: 2026-01-26 18:39:30.887 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:30 np0005596060 nova_compute[247421]: 2026-01-26 18:39:30.887 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:30 np0005596060 nova_compute[247421]: 2026-01-26 18:39:30.887 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:39:30 np0005596060 nova_compute[247421]: 2026-01-26 18:39:30.888 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.080 247428 INFO nova.compute.manager [None req-dad86f47-9f99-424c-88c7-5e2f73cc52cb ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Get console output#033[00m
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.087 285734 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 26 13:39:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:39:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1106636514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.331 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.397 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.398 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.549 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.550 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4450MB free_disk=20.942890167236328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.550 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.550 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.619 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance 766b2be2-d46f-4f27-ad07-a91017eaddaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.621 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.621 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:39:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:39:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:31.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:31 np0005596060 nova_compute[247421]: 2026-01-26 18:39:31.711 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:39:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:31.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:39:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3568038826' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.171 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.177 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.193 247428 DEBUG nova.compute.manager [req-24cae615-6e8d-48ef-b0a4-39a4c5e33754 req-383ad7a9-7c29-4ab3-8ac7-9fdc6f95ce87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-changed-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.194 247428 DEBUG nova.compute.manager [req-24cae615-6e8d-48ef-b0a4-39a4c5e33754 req-383ad7a9-7c29-4ab3-8ac7-9fdc6f95ce87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Refreshing instance network info cache due to event network-changed-17131365-352f-497d-ae25-1813ee58134e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.194 247428 DEBUG oslo_concurrency.lockutils [req-24cae615-6e8d-48ef-b0a4-39a4c5e33754 req-383ad7a9-7c29-4ab3-8ac7-9fdc6f95ce87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.194 247428 DEBUG oslo_concurrency.lockutils [req-24cae615-6e8d-48ef-b0a4-39a4c5e33754 req-383ad7a9-7c29-4ab3-8ac7-9fdc6f95ce87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.195 247428 DEBUG nova.network.neutron [req-24cae615-6e8d-48ef-b0a4-39a4c5e33754 req-383ad7a9-7c29-4ab3-8ac7-9fdc6f95ce87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Refreshing network info cache for port 17131365-352f-497d-ae25-1813ee58134e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.202 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.203 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.203 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.204 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.218 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.252 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.252 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.260 247428 DEBUG oslo_concurrency.lockutils [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.261 247428 DEBUG oslo_concurrency.lockutils [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.261 247428 DEBUG oslo_concurrency.lockutils [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.261 247428 DEBUG oslo_concurrency.lockutils [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.262 247428 DEBUG oslo_concurrency.lockutils [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.263 247428 INFO nova.compute.manager [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Terminating instance#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.264 247428 DEBUG nova.compute.manager [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:39:32 np0005596060 kernel: tap17131365-35 (unregistering): left promiscuous mode
Jan 26 13:39:32 np0005596060 NetworkManager[48900]: <info>  [1769452772.3808] device (tap17131365-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:39:32 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:32Z|00163|binding|INFO|Releasing lport 17131365-352f-497d-ae25-1813ee58134e from this chassis (sb_readonly=0)
Jan 26 13:39:32 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:32Z|00164|binding|INFO|Setting lport 17131365-352f-497d-ae25-1813ee58134e down in Southbound
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.387 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:32 np0005596060 ovn_controller[148842]: 2026-01-26T18:39:32Z|00165|binding|INFO|Removing iface tap17131365-35 ovn-installed in OVS
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.389 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.394 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:38:51:0d 10.100.0.11'], port_security=['fa:16:3e:38:51:0d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '766b2be2-d46f-4f27-ad07-a91017eaddaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68e765b7-d298-406e-a6ab-78affbd0449b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '301bad5c2066428fa7f214024672bf92', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7c9f9a1a-237f-40ca-bc88-5b27d64f4698', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89adc109-8ed5-4ab7-a474-13d16cdb85c5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=17131365-352f-497d-ae25-1813ee58134e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.395 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 17131365-352f-497d-ae25-1813ee58134e in datapath 68e765b7-d298-406e-a6ab-78affbd0449b unbound from our chassis#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.396 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 68e765b7-d298-406e-a6ab-78affbd0449b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.398 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[ee61bc09-71dd-4f01-9583-337f39b27e43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.398 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b namespace which is not needed anymore#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.405 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:32 np0005596060 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Jan 26 13:39:32 np0005596060 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001a.scope: Consumed 13.859s CPU time.
Jan 26 13:39:32 np0005596060 systemd-machined[213879]: Machine qemu-13-instance-0000001a terminated.
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.486 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.492 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.500 247428 INFO nova.virt.libvirt.driver [-] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Instance destroyed successfully.#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.501 247428 DEBUG nova.objects.instance [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'resources' on Instance uuid 766b2be2-d46f-4f27-ad07-a91017eaddaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.516 247428 DEBUG nova.virt.libvirt.vif [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T18:38:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1438735807',display_name='tempest-TestNetworkAdvancedServerOps-server-1438735807',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1438735807',id=26,image_ref='be7b1750-5d13-441e-bf97-67d885906c42',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAALh3ktKNgqM5WA8+0ghvMu82v6ROKkLY/BBhswRNrRrEsHBxjp4xByWPJgWh4j6nVL/yJ/7mkDqldjlWcflbeIcqPnHu6K9XLmvuErFpXgdr3/i5QNkrsiNow1Xs/Y3g==',key_name='tempest-TestNetworkAdvancedServerOps-1982445564',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:39:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-r97sgvh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='be7b1750-5d13-441e-bf97-67d885906c42',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:39:11Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=766b2be2-d46f-4f27-ad07-a91017eaddaf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.517 247428 DEBUG nova.network.os_vif_util [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.518 247428 DEBUG nova.network.os_vif_util [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.518 247428 DEBUG os_vif [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.521 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.521 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap17131365-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.522 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.525 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.527 247428 INFO os_vif [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:38:51:0d,bridge_name='br-int',has_traffic_filtering=True,id=17131365-352f-497d-ae25-1813ee58134e,network=Network(68e765b7-d298-406e-a6ab-78affbd0449b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap17131365-35')#033[00m
Jan 26 13:39:32 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296967]: [NOTICE]   (296972) : haproxy version is 2.8.14-c23fe91
Jan 26 13:39:32 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296967]: [NOTICE]   (296972) : path to executable is /usr/sbin/haproxy
Jan 26 13:39:32 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296967]: [WARNING]  (296972) : Exiting Master process...
Jan 26 13:39:32 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296967]: [ALERT]    (296972) : Current worker (296975) exited with code 143 (Terminated)
Jan 26 13:39:32 np0005596060 neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b[296967]: [WARNING]  (296972) : All workers exited. Exiting... (0)
Jan 26 13:39:32 np0005596060 systemd[1]: libpod-7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a.scope: Deactivated successfully.
Jan 26 13:39:32 np0005596060 podman[297163]: 2026-01-26 18:39:32.55387791 +0000 UTC m=+0.048005120 container died 7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:39:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a-userdata-shm.mount: Deactivated successfully.
Jan 26 13:39:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay-990893bf38d45517bc80c22508b10c9449e73d6f9e09a914efde4c5d7693874b-merged.mount: Deactivated successfully.
Jan 26 13:39:32 np0005596060 podman[297163]: 2026-01-26 18:39:32.617608663 +0000 UTC m=+0.111735873 container cleanup 7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 13:39:32 np0005596060 systemd[1]: libpod-conmon-7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a.scope: Deactivated successfully.
Jan 26 13:39:32 np0005596060 podman[297216]: 2026-01-26 18:39:32.679829789 +0000 UTC m=+0.040801367 container remove 7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.686 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a55a0c-993a-4ef7-9179-a906bb3678f9]: (4, ('Mon Jan 26 06:39:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b (7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a)\n7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a\nMon Jan 26 06:39:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b (7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a)\n7a45f058277d673d8bb0c398a9b61a47de7840e8335279fe141cbce6aeaa4f9a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.688 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[ecdade2f-49c0-453f-992f-cd7ed999f47c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.689 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68e765b7-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:39:32 np0005596060 kernel: tap68e765b7-d0: left promiscuous mode
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.691 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.693 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.695 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[dfebb4a5-3f5e-4eec-864c-c56d14d10cab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.704 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.715 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d52c56-a98a-4ea4-be0d-6a77c1e919be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.716 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5f180a-e188-4bda-b9ee-3a6b7fbcd244]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.732 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[8e22a446-5056-48f2-89ba-b8cf41d04535]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651456, 'reachable_time': 38663, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297230, 'error': None, 'target': 'ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:32 np0005596060 systemd[1]: run-netns-ovnmeta\x2d68e765b7\x2dd298\x2d406e\x2da6ab\x2d78affbd0449b.mount: Deactivated successfully.
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.737 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-68e765b7-d298-406e-a6ab-78affbd0449b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:39:32 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:39:32.737 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[9045b450-76cd-4947-9377-ad18a7109780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:39:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:39:32 np0005596060 nova_compute[247421]: 2026-01-26 18:39:32.826 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:33 np0005596060 nova_compute[247421]: 2026-01-26 18:39:33.306 247428 INFO nova.virt.libvirt.driver [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Deleting instance files /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf_del#033[00m
Jan 26 13:39:33 np0005596060 nova_compute[247421]: 2026-01-26 18:39:33.307 247428 INFO nova.virt.libvirt.driver [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Deletion of /var/lib/nova/instances/766b2be2-d46f-4f27-ad07-a91017eaddaf_del complete#033[00m
Jan 26 13:39:33 np0005596060 nova_compute[247421]: 2026-01-26 18:39:33.357 247428 INFO nova.compute.manager [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Took 1.09 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:39:33 np0005596060 nova_compute[247421]: 2026-01-26 18:39:33.358 247428 DEBUG oslo.service.loopingcall [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:39:33 np0005596060 nova_compute[247421]: 2026-01-26 18:39:33.358 247428 DEBUG nova.compute.manager [-] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:39:33 np0005596060 nova_compute[247421]: 2026-01-26 18:39:33.358 247428 DEBUG nova.network.neutron [-] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:39:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:39:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:33.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:39:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:33.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.279 247428 DEBUG nova.compute.manager [req-34c911c3-513a-424a-a843-82483406f6e7 req-9274623b-8e78-4d07-9b28-3e810323a9e6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-unplugged-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.279 247428 DEBUG oslo_concurrency.lockutils [req-34c911c3-513a-424a-a843-82483406f6e7 req-9274623b-8e78-4d07-9b28-3e810323a9e6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.280 247428 DEBUG oslo_concurrency.lockutils [req-34c911c3-513a-424a-a843-82483406f6e7 req-9274623b-8e78-4d07-9b28-3e810323a9e6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.280 247428 DEBUG oslo_concurrency.lockutils [req-34c911c3-513a-424a-a843-82483406f6e7 req-9274623b-8e78-4d07-9b28-3e810323a9e6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.280 247428 DEBUG nova.compute.manager [req-34c911c3-513a-424a-a843-82483406f6e7 req-9274623b-8e78-4d07-9b28-3e810323a9e6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] No waiting events found dispatching network-vif-unplugged-17131365-352f-497d-ae25-1813ee58134e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.280 247428 DEBUG nova.compute.manager [req-34c911c3-513a-424a-a843-82483406f6e7 req-9274623b-8e78-4d07-9b28-3e810323a9e6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-unplugged-17131365-352f-497d-ae25-1813ee58134e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.280 247428 DEBUG nova.compute.manager [req-34c911c3-513a-424a-a843-82483406f6e7 req-9274623b-8e78-4d07-9b28-3e810323a9e6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.280 247428 DEBUG oslo_concurrency.lockutils [req-34c911c3-513a-424a-a843-82483406f6e7 req-9274623b-8e78-4d07-9b28-3e810323a9e6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.281 247428 DEBUG oslo_concurrency.lockutils [req-34c911c3-513a-424a-a843-82483406f6e7 req-9274623b-8e78-4d07-9b28-3e810323a9e6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.281 247428 DEBUG oslo_concurrency.lockutils [req-34c911c3-513a-424a-a843-82483406f6e7 req-9274623b-8e78-4d07-9b28-3e810323a9e6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.281 247428 DEBUG nova.compute.manager [req-34c911c3-513a-424a-a843-82483406f6e7 req-9274623b-8e78-4d07-9b28-3e810323a9e6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] No waiting events found dispatching network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.281 247428 WARNING nova.compute.manager [req-34c911c3-513a-424a-a843-82483406f6e7 req-9274623b-8e78-4d07-9b28-3e810323a9e6 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received unexpected event network-vif-plugged-17131365-352f-497d-ae25-1813ee58134e for instance with vm_state active and task_state deleting.
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.386 247428 DEBUG nova.network.neutron [req-24cae615-6e8d-48ef-b0a4-39a4c5e33754 req-383ad7a9-7c29-4ab3-8ac7-9fdc6f95ce87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Updated VIF entry in instance network info cache for port 17131365-352f-497d-ae25-1813ee58134e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.386 247428 DEBUG nova.network.neutron [req-24cae615-6e8d-48ef-b0a4-39a4c5e33754 req-383ad7a9-7c29-4ab3-8ac7-9fdc6f95ce87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Updating instance_info_cache with network_info: [{"id": "17131365-352f-497d-ae25-1813ee58134e", "address": "fa:16:3e:38:51:0d", "network": {"id": "68e765b7-d298-406e-a6ab-78affbd0449b", "bridge": "br-int", "label": "tempest-network-smoke--1850015655", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap17131365-35", "ovs_interfaceid": "17131365-352f-497d-ae25-1813ee58134e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 13:39:34 np0005596060 nova_compute[247421]: 2026-01-26 18:39:34.406 247428 DEBUG oslo_concurrency.lockutils [req-24cae615-6e8d-48ef-b0a4-39a4c5e33754 req-383ad7a9-7c29-4ab3-8ac7-9fdc6f95ce87 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-766b2be2-d46f-4f27-ad07-a91017eaddaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 13:39:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 101 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 247 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Jan 26 13:39:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:35.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:35 np0005596060 nova_compute[247421]: 2026-01-26 18:39:35.888 247428 DEBUG nova.network.neutron [-] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 13:39:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:35.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:35 np0005596060 nova_compute[247421]: 2026-01-26 18:39:35.911 247428 INFO nova.compute.manager [-] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Took 2.55 seconds to deallocate network for instance.
Jan 26 13:39:35 np0005596060 nova_compute[247421]: 2026-01-26 18:39:35.958 247428 DEBUG oslo_concurrency.lockutils [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:39:35 np0005596060 nova_compute[247421]: 2026-01-26 18:39:35.959 247428 DEBUG oslo_concurrency.lockutils [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:39:35 np0005596060 nova_compute[247421]: 2026-01-26 18:39:35.970 247428 DEBUG nova.compute.manager [req-48fa061c-21ac-48fb-9bab-1acc4ea04555 req-fbcaf1e3-0d3c-46d7-8671-20c2e58f0fab 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Received event network-vif-deleted-17131365-352f-497d-ae25-1813ee58134e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 13:39:36 np0005596060 nova_compute[247421]: 2026-01-26 18:39:36.004 247428 DEBUG oslo_concurrency.processutils [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:39:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:39:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788351261' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:39:36 np0005596060 nova_compute[247421]: 2026-01-26 18:39:36.514 247428 DEBUG oslo_concurrency.processutils [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:39:36 np0005596060 nova_compute[247421]: 2026-01-26 18:39:36.524 247428 DEBUG nova.compute.provider_tree [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:39:36 np0005596060 nova_compute[247421]: 2026-01-26 18:39:36.543 247428 DEBUG nova.scheduler.client.report [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:39:36 np0005596060 nova_compute[247421]: 2026-01-26 18:39:36.567 247428 DEBUG oslo_concurrency.lockutils [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:39:36 np0005596060 nova_compute[247421]: 2026-01-26 18:39:36.591 247428 INFO nova.scheduler.client.report [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Deleted allocations for instance 766b2be2-d46f-4f27-ad07-a91017eaddaf
Jan 26 13:39:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:39:36 np0005596060 nova_compute[247421]: 2026-01-26 18:39:36.653 247428 DEBUG oslo_concurrency.lockutils [None req-b19b69c5-d0b7-4682-b46a-8febab00f848 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "766b2be2-d46f-4f27-ad07-a91017eaddaf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:39:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 70 MiB data, 393 MiB used, 21 GiB / 21 GiB avail; 205 KiB/s rd, 971 KiB/s wr, 62 op/s
Jan 26 13:39:37 np0005596060 podman[297428]: 2026-01-26 18:39:37.423905737 +0000 UTC m=+0.068753452 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 13:39:37 np0005596060 podman[297428]: 2026-01-26 18:39:37.514081606 +0000 UTC m=+0.158929341 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 13:39:37 np0005596060 nova_compute[247421]: 2026-01-26 18:39:37.523 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:39:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:37.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:37 np0005596060 nova_compute[247421]: 2026-01-26 18:39:37.827 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:39:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:39:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:37.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:39:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:39:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:39:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:38 np0005596060 podman[297580]: 2026-01-26 18:39:38.400391613 +0000 UTC m=+0.158518211 container exec e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 13:39:38 np0005596060 podman[297580]: 2026-01-26 18:39:38.411481772 +0000 UTC m=+0.169608340 container exec_died e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 13:39:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:38 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 181 KiB/s rd, 53 KiB/s wr, 51 op/s
Jan 26 13:39:38 np0005596060 podman[297644]: 2026-01-26 18:39:38.824016524 +0000 UTC m=+0.254166737 container exec 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, io.openshift.expose-services=, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vcs-type=git, description=keepalived for Ceph, vendor=Red Hat, Inc., release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2)
Jan 26 13:39:38 np0005596060 podman[297664]: 2026-01-26 18:39:38.917456535 +0000 UTC m=+0.067538710 container exec_died 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-type=git, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, vendor=Red Hat, Inc., description=keepalived for Ceph, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, distribution-scope=public)
Jan 26 13:39:38 np0005596060 podman[297644]: 2026-01-26 18:39:38.994675319 +0000 UTC m=+0.424825512 container exec_died 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, version=2.2.4, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, name=keepalived, description=keepalived for Ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public)
Jan 26 13:39:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:39:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:39:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:39 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 13:39:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:39.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:39:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:39:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:39:39 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:39:39 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:39:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:40 np0005596060 nova_compute[247421]: 2026-01-26 18:39:40.154 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:39:40 np0005596060 nova_compute[247421]: 2026-01-26 18:39:40.232 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 85e58a23-c9f6-4f8b-9e3c-fae9a05f71d8 does not exist
Jan 26 13:39:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9abd01c8-7a5f-4758-8dce-65baa4dbf694 does not exist
Jan 26 13:39:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5bea8273-9e9d-43a1-9b6b-698ee9df407c does not exist
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.293409) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452780293481, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 1941, "num_deletes": 252, "total_data_size": 3710120, "memory_usage": 3770800, "flush_reason": "Manual Compaction"}
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452780319101, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 3645241, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41583, "largest_seqno": 43523, "table_properties": {"data_size": 3636742, "index_size": 5121, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18897, "raw_average_key_size": 20, "raw_value_size": 3619119, "raw_average_value_size": 3942, "num_data_blocks": 223, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769452601, "oldest_key_time": 1769452601, "file_creation_time": 1769452780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 25758 microseconds, and 7710 cpu microseconds.
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.319163) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 3645241 bytes OK
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.319207) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.327866) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.327923) EVENT_LOG_v1 {"time_micros": 1769452780327911, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.327984) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3702089, prev total WAL file size 3702089, number of live WAL files 2.
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.329256) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(3559KB)], [92(10MB)]
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452780329335, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 14605291, "oldest_snapshot_seqno": -1}
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6927 keys, 12579930 bytes, temperature: kUnknown
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452780408831, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 12579930, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12532339, "index_size": 29175, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 178426, "raw_average_key_size": 25, "raw_value_size": 12406619, "raw_average_value_size": 1791, "num_data_blocks": 1167, "num_entries": 6927, "num_filter_entries": 6927, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769452780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.409099) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 12579930 bytes
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.410501) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.5 rd, 158.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.5 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 7456, records dropped: 529 output_compression: NoCompression
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.410565) EVENT_LOG_v1 {"time_micros": 1769452780410511, "job": 54, "event": "compaction_finished", "compaction_time_micros": 79607, "compaction_time_cpu_micros": 28344, "output_level": 6, "num_output_files": 1, "total_output_size": 12579930, "num_input_records": 7456, "num_output_records": 6927, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452780411488, "job": 54, "event": "table_file_deletion", "file_number": 94}
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452780413718, "job": 54, "event": "table_file_deletion", "file_number": 92}
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.329158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.413822) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.413830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.413831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.413833) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:39:40 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:39:40.413834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:39:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 26 13:39:40 np0005596060 podman[297949]: 2026-01-26 18:39:40.899295693 +0000 UTC m=+0.086242352 container create 9303674e5f9c37576c72ae28a2ba4e5cbbc525c7a67f7564a6e0f05379c9dd4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sammet, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 13:39:40 np0005596060 podman[297949]: 2026-01-26 18:39:40.836536774 +0000 UTC m=+0.023483453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:39:40 np0005596060 systemd[1]: Started libpod-conmon-9303674e5f9c37576c72ae28a2ba4e5cbbc525c7a67f7564a6e0f05379c9dd4d.scope.
Jan 26 13:39:40 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:39:41 np0005596060 podman[297949]: 2026-01-26 18:39:41.020051412 +0000 UTC m=+0.206998101 container init 9303674e5f9c37576c72ae28a2ba4e5cbbc525c7a67f7564a6e0f05379c9dd4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:39:41 np0005596060 podman[297949]: 2026-01-26 18:39:41.029381637 +0000 UTC m=+0.216328296 container start 9303674e5f9c37576c72ae28a2ba4e5cbbc525c7a67f7564a6e0f05379c9dd4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sammet, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:39:41 np0005596060 podman[297949]: 2026-01-26 18:39:41.034065305 +0000 UTC m=+0.221011964 container attach 9303674e5f9c37576c72ae28a2ba4e5cbbc525c7a67f7564a6e0f05379c9dd4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sammet, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:39:41 np0005596060 laughing_sammet[297966]: 167 167
Jan 26 13:39:41 np0005596060 systemd[1]: libpod-9303674e5f9c37576c72ae28a2ba4e5cbbc525c7a67f7564a6e0f05379c9dd4d.scope: Deactivated successfully.
Jan 26 13:39:41 np0005596060 podman[297949]: 2026-01-26 18:39:41.036689971 +0000 UTC m=+0.223636630 container died 9303674e5f9c37576c72ae28a2ba4e5cbbc525c7a67f7564a6e0f05379c9dd4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:39:41 np0005596060 systemd[1]: var-lib-containers-storage-overlay-be0cd45e7bc6d0eb507809c6c50f0afe8ed9b90559d9a453791d97115cc79c03-merged.mount: Deactivated successfully.
Jan 26 13:39:41 np0005596060 podman[297949]: 2026-01-26 18:39:41.149930131 +0000 UTC m=+0.336876790 container remove 9303674e5f9c37576c72ae28a2ba4e5cbbc525c7a67f7564a6e0f05379c9dd4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_sammet, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 13:39:41 np0005596060 systemd[1]: libpod-conmon-9303674e5f9c37576c72ae28a2ba4e5cbbc525c7a67f7564a6e0f05379c9dd4d.scope: Deactivated successfully.
Jan 26 13:39:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:41 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:39:41 np0005596060 podman[297990]: 2026-01-26 18:39:41.319002716 +0000 UTC m=+0.046126512 container create be530916994f14bce9641b5582eb56368111a52983a6e71eff8944f84c8c6ab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_colden, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 13:39:41 np0005596060 podman[297990]: 2026-01-26 18:39:41.297954126 +0000 UTC m=+0.025077942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:39:41 np0005596060 systemd[1]: Started libpod-conmon-be530916994f14bce9641b5582eb56368111a52983a6e71eff8944f84c8c6ab7.scope.
Jan 26 13:39:41 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:39:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e19db4e0b35104ef4acefde226fdf85b2020db6225aa95dbf32dd0abdaa0d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e19db4e0b35104ef4acefde226fdf85b2020db6225aa95dbf32dd0abdaa0d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e19db4e0b35104ef4acefde226fdf85b2020db6225aa95dbf32dd0abdaa0d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e19db4e0b35104ef4acefde226fdf85b2020db6225aa95dbf32dd0abdaa0d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:41 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74e19db4e0b35104ef4acefde226fdf85b2020db6225aa95dbf32dd0abdaa0d0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:41 np0005596060 podman[297990]: 2026-01-26 18:39:41.540934181 +0000 UTC m=+0.268057987 container init be530916994f14bce9641b5582eb56368111a52983a6e71eff8944f84c8c6ab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_colden, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 13:39:41 np0005596060 podman[297990]: 2026-01-26 18:39:41.549258541 +0000 UTC m=+0.276382377 container start be530916994f14bce9641b5582eb56368111a52983a6e71eff8944f84c8c6ab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 26 13:39:41 np0005596060 podman[297990]: 2026-01-26 18:39:41.55360919 +0000 UTC m=+0.280732996 container attach be530916994f14bce9641b5582eb56368111a52983a6e71eff8944f84c8c6ab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:39:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:39:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:41.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:42 np0005596060 naughty_colden[298006]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:39:42 np0005596060 naughty_colden[298006]: --> relative data size: 1.0
Jan 26 13:39:42 np0005596060 naughty_colden[298006]: --> All data devices are unavailable
Jan 26 13:39:42 np0005596060 systemd[1]: libpod-be530916994f14bce9641b5582eb56368111a52983a6e71eff8944f84c8c6ab7.scope: Deactivated successfully.
Jan 26 13:39:42 np0005596060 podman[297990]: 2026-01-26 18:39:42.446508302 +0000 UTC m=+1.173632118 container died be530916994f14bce9641b5582eb56368111a52983a6e71eff8944f84c8c6ab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_colden, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:39:42 np0005596060 systemd[1]: var-lib-containers-storage-overlay-74e19db4e0b35104ef4acefde226fdf85b2020db6225aa95dbf32dd0abdaa0d0-merged.mount: Deactivated successfully.
Jan 26 13:39:42 np0005596060 podman[297990]: 2026-01-26 18:39:42.518747879 +0000 UTC m=+1.245871675 container remove be530916994f14bce9641b5582eb56368111a52983a6e71eff8944f84c8c6ab7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 13:39:42 np0005596060 systemd[1]: libpod-conmon-be530916994f14bce9641b5582eb56368111a52983a6e71eff8944f84c8c6ab7.scope: Deactivated successfully.
Jan 26 13:39:42 np0005596060 nova_compute[247421]: 2026-01-26 18:39:42.530 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 28 op/s
Jan 26 13:39:42 np0005596060 nova_compute[247421]: 2026-01-26 18:39:42.829 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:43 np0005596060 podman[298227]: 2026-01-26 18:39:43.121226442 +0000 UTC m=+0.041681250 container create 590efb010324c3444b1aa23caeea620d81405e4444fafac990bade13dac27d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 26 13:39:43 np0005596060 systemd[1]: Started libpod-conmon-590efb010324c3444b1aa23caeea620d81405e4444fafac990bade13dac27d01.scope.
Jan 26 13:39:43 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:39:43 np0005596060 podman[298227]: 2026-01-26 18:39:43.194760153 +0000 UTC m=+0.115214961 container init 590efb010324c3444b1aa23caeea620d81405e4444fafac990bade13dac27d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 13:39:43 np0005596060 podman[298227]: 2026-01-26 18:39:43.104329727 +0000 UTC m=+0.024784555 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:39:43 np0005596060 podman[298227]: 2026-01-26 18:39:43.20458437 +0000 UTC m=+0.125039178 container start 590efb010324c3444b1aa23caeea620d81405e4444fafac990bade13dac27d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 13:39:43 np0005596060 podman[298227]: 2026-01-26 18:39:43.208003176 +0000 UTC m=+0.128458004 container attach 590efb010324c3444b1aa23caeea620d81405e4444fafac990bade13dac27d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hofstadter, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:39:43 np0005596060 strange_hofstadter[298244]: 167 167
Jan 26 13:39:43 np0005596060 systemd[1]: libpod-590efb010324c3444b1aa23caeea620d81405e4444fafac990bade13dac27d01.scope: Deactivated successfully.
Jan 26 13:39:43 np0005596060 podman[298227]: 2026-01-26 18:39:43.210113279 +0000 UTC m=+0.130568117 container died 590efb010324c3444b1aa23caeea620d81405e4444fafac990bade13dac27d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hofstadter, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:39:43 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a8b261ed61c9811137dabc2e2c2c1f6ab1b0bc72ccb5f57bd8318e084b0589c8-merged.mount: Deactivated successfully.
Jan 26 13:39:43 np0005596060 podman[298227]: 2026-01-26 18:39:43.251025809 +0000 UTC m=+0.171480617 container remove 590efb010324c3444b1aa23caeea620d81405e4444fafac990bade13dac27d01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:39:43 np0005596060 systemd[1]: libpod-conmon-590efb010324c3444b1aa23caeea620d81405e4444fafac990bade13dac27d01.scope: Deactivated successfully.
Jan 26 13:39:43 np0005596060 podman[298266]: 2026-01-26 18:39:43.427636664 +0000 UTC m=+0.051899457 container create f499c452d931c18ea291ec3ca261c63cb4cec87ac88f31851a57a2d0dab59ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_diffie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:39:43 np0005596060 systemd[1]: Started libpod-conmon-f499c452d931c18ea291ec3ca261c63cb4cec87ac88f31851a57a2d0dab59ef9.scope.
Jan 26 13:39:43 np0005596060 podman[298266]: 2026-01-26 18:39:43.402553543 +0000 UTC m=+0.026816316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:39:43 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:39:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937a2f6ef8fe04c972f0221e344c8dfcd210c958672280e316e4b6ea4fdffdd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937a2f6ef8fe04c972f0221e344c8dfcd210c958672280e316e4b6ea4fdffdd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937a2f6ef8fe04c972f0221e344c8dfcd210c958672280e316e4b6ea4fdffdd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:43 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/937a2f6ef8fe04c972f0221e344c8dfcd210c958672280e316e4b6ea4fdffdd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:43 np0005596060 podman[298266]: 2026-01-26 18:39:43.543666315 +0000 UTC m=+0.167929108 container init f499c452d931c18ea291ec3ca261c63cb4cec87ac88f31851a57a2d0dab59ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 13:39:43 np0005596060 podman[298266]: 2026-01-26 18:39:43.552111317 +0000 UTC m=+0.176374090 container start f499c452d931c18ea291ec3ca261c63cb4cec87ac88f31851a57a2d0dab59ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 13:39:43 np0005596060 podman[298266]: 2026-01-26 18:39:43.555125313 +0000 UTC m=+0.179388086 container attach f499c452d931c18ea291ec3ca261c63cb4cec87ac88f31851a57a2d0dab59ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:39:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:39:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:43.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:39:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:43.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:39:44
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'vms', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', 'images']
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]: {
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:    "1": [
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:        {
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            "devices": [
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "/dev/loop3"
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            ],
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            "lv_name": "ceph_lv0",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            "lv_size": "7511998464",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            "name": "ceph_lv0",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            "tags": {
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "ceph.cluster_name": "ceph",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "ceph.crush_device_class": "",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "ceph.encrypted": "0",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "ceph.osd_id": "1",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "ceph.type": "block",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:                "ceph.vdo": "0"
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            },
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            "type": "block",
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:            "vg_name": "ceph_vg0"
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:        }
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]:    ]
Jan 26 13:39:44 np0005596060 hungry_diffie[298283]: }
Jan 26 13:39:44 np0005596060 systemd[1]: libpod-f499c452d931c18ea291ec3ca261c63cb4cec87ac88f31851a57a2d0dab59ef9.scope: Deactivated successfully.
Jan 26 13:39:44 np0005596060 podman[298266]: 2026-01-26 18:39:44.321635664 +0000 UTC m=+0.945898457 container died f499c452d931c18ea291ec3ca261c63cb4cec87ac88f31851a57a2d0dab59ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_diffie, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:39:44 np0005596060 systemd[1]: var-lib-containers-storage-overlay-937a2f6ef8fe04c972f0221e344c8dfcd210c958672280e316e4b6ea4fdffdd6-merged.mount: Deactivated successfully.
Jan 26 13:39:44 np0005596060 podman[298266]: 2026-01-26 18:39:44.38067529 +0000 UTC m=+1.004938063 container remove f499c452d931c18ea291ec3ca261c63cb4cec87ac88f31851a57a2d0dab59ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 13:39:44 np0005596060 systemd[1]: libpod-conmon-f499c452d931c18ea291ec3ca261c63cb4cec87ac88f31851a57a2d0dab59ef9.scope: Deactivated successfully.
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:39:44 np0005596060 ceph-mgr[74563]: client.0 ms_handle_reset on v2:192.168.122.100:6800/2716354406
Jan 26 13:39:45 np0005596060 podman[298445]: 2026-01-26 18:39:45.003167187 +0000 UTC m=+0.037090934 container create 58357bc5b1246eb6b926ec709b525ddb02dec5e590bcd594814396789ec46aa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_murdock, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 26 13:39:45 np0005596060 systemd[1]: Started libpod-conmon-58357bc5b1246eb6b926ec709b525ddb02dec5e590bcd594814396789ec46aa7.scope.
Jan 26 13:39:45 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:39:45 np0005596060 podman[298445]: 2026-01-26 18:39:44.987013861 +0000 UTC m=+0.020937508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:39:45 np0005596060 podman[298445]: 2026-01-26 18:39:45.086598707 +0000 UTC m=+0.120522364 container init 58357bc5b1246eb6b926ec709b525ddb02dec5e590bcd594814396789ec46aa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_murdock, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:39:45 np0005596060 podman[298445]: 2026-01-26 18:39:45.09386315 +0000 UTC m=+0.127786777 container start 58357bc5b1246eb6b926ec709b525ddb02dec5e590bcd594814396789ec46aa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 13:39:45 np0005596060 wizardly_murdock[298461]: 167 167
Jan 26 13:39:45 np0005596060 systemd[1]: libpod-58357bc5b1246eb6b926ec709b525ddb02dec5e590bcd594814396789ec46aa7.scope: Deactivated successfully.
Jan 26 13:39:45 np0005596060 podman[298445]: 2026-01-26 18:39:45.102360244 +0000 UTC m=+0.136283921 container attach 58357bc5b1246eb6b926ec709b525ddb02dec5e590bcd594814396789ec46aa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 13:39:45 np0005596060 podman[298445]: 2026-01-26 18:39:45.103582885 +0000 UTC m=+0.137506532 container died 58357bc5b1246eb6b926ec709b525ddb02dec5e590bcd594814396789ec46aa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:39:45 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ff004f163f38646be61d35e350bf7a566882fad404c03e533560bfbbc216533b-merged.mount: Deactivated successfully.
Jan 26 13:39:45 np0005596060 podman[298445]: 2026-01-26 18:39:45.146090864 +0000 UTC m=+0.180014491 container remove 58357bc5b1246eb6b926ec709b525ddb02dec5e590bcd594814396789ec46aa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:39:45 np0005596060 systemd[1]: libpod-conmon-58357bc5b1246eb6b926ec709b525ddb02dec5e590bcd594814396789ec46aa7.scope: Deactivated successfully.
Jan 26 13:39:45 np0005596060 podman[298486]: 2026-01-26 18:39:45.309052246 +0000 UTC m=+0.044727597 container create 3f4bdfb0af15fb738a191bcd64254b434bcc37bb9d452df020979bb7c72eba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:39:45 np0005596060 systemd[1]: Started libpod-conmon-3f4bdfb0af15fb738a191bcd64254b434bcc37bb9d452df020979bb7c72eba91.scope.
Jan 26 13:39:45 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:39:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6052b099292713bb6703a41907af9d7c5fe2dd168f84378927b914cad683faa4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6052b099292713bb6703a41907af9d7c5fe2dd168f84378927b914cad683faa4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6052b099292713bb6703a41907af9d7c5fe2dd168f84378927b914cad683faa4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:45 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6052b099292713bb6703a41907af9d7c5fe2dd168f84378927b914cad683faa4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:39:45 np0005596060 podman[298486]: 2026-01-26 18:39:45.291315229 +0000 UTC m=+0.026990600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:39:45 np0005596060 podman[298486]: 2026-01-26 18:39:45.394931688 +0000 UTC m=+0.130607039 container init 3f4bdfb0af15fb738a191bcd64254b434bcc37bb9d452df020979bb7c72eba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_colden, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:39:45 np0005596060 podman[298486]: 2026-01-26 18:39:45.401329449 +0000 UTC m=+0.137004790 container start 3f4bdfb0af15fb738a191bcd64254b434bcc37bb9d452df020979bb7c72eba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_colden, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:39:45 np0005596060 podman[298486]: 2026-01-26 18:39:45.404324314 +0000 UTC m=+0.139999655 container attach 3f4bdfb0af15fb738a191bcd64254b434bcc37bb9d452df020979bb7c72eba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 13:39:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:45.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:45.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:46 np0005596060 clever_colden[298503]: {
Jan 26 13:39:46 np0005596060 clever_colden[298503]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:39:46 np0005596060 clever_colden[298503]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:39:46 np0005596060 clever_colden[298503]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:39:46 np0005596060 clever_colden[298503]:        "osd_id": 1,
Jan 26 13:39:46 np0005596060 clever_colden[298503]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:39:46 np0005596060 clever_colden[298503]:        "type": "bluestore"
Jan 26 13:39:46 np0005596060 clever_colden[298503]:    }
Jan 26 13:39:46 np0005596060 clever_colden[298503]: }
Jan 26 13:39:46 np0005596060 systemd[1]: libpod-3f4bdfb0af15fb738a191bcd64254b434bcc37bb9d452df020979bb7c72eba91.scope: Deactivated successfully.
Jan 26 13:39:46 np0005596060 podman[298486]: 2026-01-26 18:39:46.290342312 +0000 UTC m=+1.026017653 container died 3f4bdfb0af15fb738a191bcd64254b434bcc37bb9d452df020979bb7c72eba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:39:46 np0005596060 systemd[1]: var-lib-containers-storage-overlay-6052b099292713bb6703a41907af9d7c5fe2dd168f84378927b914cad683faa4-merged.mount: Deactivated successfully.
Jan 26 13:39:46 np0005596060 podman[298486]: 2026-01-26 18:39:46.598422915 +0000 UTC m=+1.334098256 container remove 3f4bdfb0af15fb738a191bcd64254b434bcc37bb9d452df020979bb7c72eba91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_colden, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:39:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:39:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:39:46 np0005596060 systemd[1]: libpod-conmon-3f4bdfb0af15fb738a191bcd64254b434bcc37bb9d452df020979bb7c72eba91.scope: Deactivated successfully.
Jan 26 13:39:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:39:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:46 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev abdc01e8-76e9-44d3-aa6f-b4db25135d3d does not exist
Jan 26 13:39:46 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d6f7af60-2018-45b7-98d6-9925c617d281 does not exist
Jan 26 13:39:46 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8e541890-1e13-4657-ae8b-0373205e4fd1 does not exist
Jan 26 13:39:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Jan 26 13:39:47 np0005596060 nova_compute[247421]: 2026-01-26 18:39:47.498 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769452772.4965262, 766b2be2-d46f-4f27-ad07-a91017eaddaf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:39:47 np0005596060 nova_compute[247421]: 2026-01-26 18:39:47.499 247428 INFO nova.compute.manager [-] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:39:47 np0005596060 nova_compute[247421]: 2026-01-26 18:39:47.518 247428 DEBUG nova.compute.manager [None req-7ae512d5-a1e3-41cc-9dcd-b26624931186 - - - - - -] [instance: 766b2be2-d46f-4f27-ad07-a91017eaddaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:39:47 np0005596060 nova_compute[247421]: 2026-01-26 18:39:47.534 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:39:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:47.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:39:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:47 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:39:47 np0005596060 nova_compute[247421]: 2026-01-26 18:39:47.830 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:39:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:47.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:39:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail; 4.4 KiB/s rd, 341 B/s wr, 7 op/s
Jan 26 13:39:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:49.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:49.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:39:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:39:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:39:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:51.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:39:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:51.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:52 np0005596060 nova_compute[247421]: 2026-01-26 18:39:52.582 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:39:52 np0005596060 nova_compute[247421]: 2026-01-26 18:39:52.832 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:53.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:39:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:53.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:39:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:39:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:55.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:55.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:39:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:39:57 np0005596060 nova_compute[247421]: 2026-01-26 18:39:57.583 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:39:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:57.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:39:57 np0005596060 podman[298592]: 2026-01-26 18:39:57.807649165 +0000 UTC m=+0.060015161 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 13:39:57 np0005596060 nova_compute[247421]: 2026-01-26 18:39:57.834 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:39:57 np0005596060 podman[298593]: 2026-01-26 18:39:57.837491376 +0000 UTC m=+0.089854312 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 26 13:39:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:39:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:57.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:39:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:39:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:39:59.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:39:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:39:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:39:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:39:59.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:00 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 13:40:00 np0005596060 ceph-mon[74267]: overall HEALTH_OK
Jan 26 13:40:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:40:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:40:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:01.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:01.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:02 np0005596060 nova_compute[247421]: 2026-01-26 18:40:02.623 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:40:02 np0005596060 nova_compute[247421]: 2026-01-26 18:40:02.836 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:03.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:03.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:40:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:40:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:05.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:05.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.676800) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452806676905, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 475, "num_deletes": 252, "total_data_size": 490085, "memory_usage": 499760, "flush_reason": "Manual Compaction"}
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452806681324, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 378062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43524, "largest_seqno": 43998, "table_properties": {"data_size": 375464, "index_size": 634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6987, "raw_average_key_size": 20, "raw_value_size": 370185, "raw_average_value_size": 1091, "num_data_blocks": 28, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769452781, "oldest_key_time": 1769452781, "file_creation_time": 1769452806, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 4557 microseconds, and 1881 cpu microseconds.
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.681366) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 378062 bytes OK
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.681385) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.682525) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.682539) EVENT_LOG_v1 {"time_micros": 1769452806682534, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.682557) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 487288, prev total WAL file size 487288, number of live WAL files 2.
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.683003) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353033' seq:72057594037927935, type:22 .. '6D6772737461740031373536' seq:0, type:0; will stop at (end)
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(369KB)], [95(11MB)]
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452806683125, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 12957992, "oldest_snapshot_seqno": -1}
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6757 keys, 9194977 bytes, temperature: kUnknown
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452806768767, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 9194977, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9153106, "index_size": 23903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 175096, "raw_average_key_size": 25, "raw_value_size": 9034936, "raw_average_value_size": 1337, "num_data_blocks": 947, "num_entries": 6757, "num_filter_entries": 6757, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769452806, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.769099) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 9194977 bytes
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.770932) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 151.0 rd, 107.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 12.0 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(58.6) write-amplify(24.3) OK, records in: 7266, records dropped: 509 output_compression: NoCompression
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.770967) EVENT_LOG_v1 {"time_micros": 1769452806770951, "job": 56, "event": "compaction_finished", "compaction_time_micros": 85805, "compaction_time_cpu_micros": 44930, "output_level": 6, "num_output_files": 1, "total_output_size": 9194977, "num_input_records": 7266, "num_output_records": 6757, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452806771526, "job": 56, "event": "table_file_deletion", "file_number": 97}
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452806774997, "job": 56, "event": "table_file_deletion", "file_number": 95}
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.682899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.775079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.775085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.775087) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.775089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:40:06 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:40:06.775091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:40:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:40:07 np0005596060 nova_compute[247421]: 2026-01-26 18:40:07.625 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:07.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:07 np0005596060 nova_compute[247421]: 2026-01-26 18:40:07.838 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:07.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:40:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:09.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:09.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:10.437 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:40:10 np0005596060 nova_compute[247421]: 2026-01-26 18:40:10.438 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:10.439 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:40:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 41 MiB data, 364 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:40:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:40:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:11.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:11.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:12 np0005596060 nova_compute[247421]: 2026-01-26 18:40:12.627 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 84 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 26 13:40:12 np0005596060 nova_compute[247421]: 2026-01-26 18:40:12.840 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:13.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:13.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:40:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:40:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:40:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:40:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:40:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:40:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:14.766 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:40:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:14.767 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:40:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:14.767 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:40:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 88 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:40:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:15.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:15.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:40:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 88 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 678 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Jan 26 13:40:17 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:17.441 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:40:17 np0005596060 nova_compute[247421]: 2026-01-26 18:40:17.629 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:17.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:17 np0005596060 nova_compute[247421]: 2026-01-26 18:40:17.842 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:17.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:18 np0005596060 nova_compute[247421]: 2026-01-26 18:40:18.041 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:40:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 88 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 13:40:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:19.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:19.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 88 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 13:40:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:40:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:21.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:21.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:22 np0005596060 nova_compute[247421]: 2026-01-26 18:40:22.631 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:22 np0005596060 nova_compute[247421]: 2026-01-26 18:40:22.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:40:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 88 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 13:40:22 np0005596060 nova_compute[247421]: 2026-01-26 18:40:22.844 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:23 np0005596060 nova_compute[247421]: 2026-01-26 18:40:23.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:40:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:23.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:23.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:24 np0005596060 nova_compute[247421]: 2026-01-26 18:40:24.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:40:24 np0005596060 nova_compute[247421]: 2026-01-26 18:40:24.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:40:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 88 MiB data, 385 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 73 op/s
Jan 26 13:40:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:25.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:25.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:26 np0005596060 nova_compute[247421]: 2026-01-26 18:40:26.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:40:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:40:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 98 MiB data, 397 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1002 KiB/s wr, 84 op/s
Jan 26 13:40:27 np0005596060 nova_compute[247421]: 2026-01-26 18:40:27.634 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:27 np0005596060 nova_compute[247421]: 2026-01-26 18:40:27.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:40:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:27.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:27 np0005596060 nova_compute[247421]: 2026-01-26 18:40:27.845 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:27.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:28 np0005596060 ovn_controller[148842]: 2026-01-26T18:40:28Z|00166|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Jan 26 13:40:28 np0005596060 nova_compute[247421]: 2026-01-26 18:40:28.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:40:28 np0005596060 nova_compute[247421]: 2026-01-26 18:40:28.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:40:28 np0005596060 nova_compute[247421]: 2026-01-26 18:40:28.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:40:28 np0005596060 nova_compute[247421]: 2026-01-26 18:40:28.681 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:40:28 np0005596060 podman[298749]: 2026-01-26 18:40:28.795295996 +0000 UTC m=+0.059203871 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 13:40:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 113 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 90 op/s
Jan 26 13:40:28 np0005596060 podman[298750]: 2026-01-26 18:40:28.830361528 +0000 UTC m=+0.091100083 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Jan 26 13:40:29 np0005596060 nova_compute[247421]: 2026-01-26 18:40:29.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:40:29 np0005596060 nova_compute[247421]: 2026-01-26 18:40:29.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:40:29 np0005596060 nova_compute[247421]: 2026-01-26 18:40:29.688 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:40:29 np0005596060 nova_compute[247421]: 2026-01-26 18:40:29.689 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:40:29 np0005596060 nova_compute[247421]: 2026-01-26 18:40:29.689 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:40:29 np0005596060 nova_compute[247421]: 2026-01-26 18:40:29.689 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:40:29 np0005596060 nova_compute[247421]: 2026-01-26 18:40:29.690 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:40:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:29.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:29.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:40:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/713021678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:40:30 np0005596060 nova_compute[247421]: 2026-01-26 18:40:30.167 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:40:30 np0005596060 nova_compute[247421]: 2026-01-26 18:40:30.352 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:40:30 np0005596060 nova_compute[247421]: 2026-01-26 18:40:30.355 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4640MB free_disk=20.944076538085938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:40:30 np0005596060 nova_compute[247421]: 2026-01-26 18:40:30.355 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:40:30 np0005596060 nova_compute[247421]: 2026-01-26 18:40:30.355 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:40:30 np0005596060 nova_compute[247421]: 2026-01-26 18:40:30.423 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:40:30 np0005596060 nova_compute[247421]: 2026-01-26 18:40:30.424 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:40:30 np0005596060 nova_compute[247421]: 2026-01-26 18:40:30.441 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:40:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 113 MiB data, 409 MiB used, 21 GiB / 21 GiB avail; 169 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Jan 26 13:40:30 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:40:30 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3002790003' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:40:30 np0005596060 nova_compute[247421]: 2026-01-26 18:40:30.905 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:40:30 np0005596060 nova_compute[247421]: 2026-01-26 18:40:30.911 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:40:30 np0005596060 nova_compute[247421]: 2026-01-26 18:40:30.929 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:40:31 np0005596060 nova_compute[247421]: 2026-01-26 18:40:31.206 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:40:31 np0005596060 nova_compute[247421]: 2026-01-26 18:40:31.207 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:40:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:40:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:31.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:31.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:32 np0005596060 nova_compute[247421]: 2026-01-26 18:40:32.636 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 118 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 400 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 13:40:32 np0005596060 nova_compute[247421]: 2026-01-26 18:40:32.846 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:40:33 np0005596060 nova_compute[247421]: 2026-01-26 18:40:33.203 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:40:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:33.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:33.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 407 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 26 13:40:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:35.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:35.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:40:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 408 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.264 247428 DEBUG nova.compute.manager [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.364 247428 DEBUG oslo_concurrency.lockutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.364 247428 DEBUG oslo_concurrency.lockutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.388 247428 DEBUG nova.objects.instance [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'pci_requests' on Instance uuid b81e40ad-cba8-4851-8245-5c3eb983b479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.407 247428 DEBUG nova.virt.hardware [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.407 247428 INFO nova.compute.claims [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Claim successful on node compute-0.ctlplane.example.com
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.408 247428 DEBUG nova.objects.instance [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'resources' on Instance uuid b81e40ad-cba8-4851-8245-5c3eb983b479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.421 247428 DEBUG nova.objects.instance [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'pci_devices' on Instance uuid b81e40ad-cba8-4851-8245-5c3eb983b479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.460 247428 INFO nova.compute.resource_tracker [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Updating resource usage from migration cca79c36-9a99-47b8-a0b0-1908c615a3bc
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.461 247428 DEBUG nova.compute.resource_tracker [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Starting to track incoming migration cca79c36-9a99-47b8-a0b0-1908c615a3bc with flavor d6eed492-4ac8-4913-b8dd-e9e1922604e9 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.513 247428 DEBUG oslo_concurrency.processutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.639 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:40:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:37.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.850 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:40:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:40:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233536652' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.957 247428 DEBUG oslo_concurrency.processutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:40:37 np0005596060 nova_compute[247421]: 2026-01-26 18:40:37.966 247428 DEBUG nova.compute.provider_tree [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:40:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:37.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:38 np0005596060 nova_compute[247421]: 2026-01-26 18:40:38.179 247428 DEBUG nova.scheduler.client.report [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:40:38 np0005596060 nova_compute[247421]: 2026-01-26 18:40:38.749 247428 DEBUG oslo_concurrency.lockutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 1.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:40:38 np0005596060 nova_compute[247421]: 2026-01-26 18:40:38.750 247428 INFO nova.compute.manager [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Migrating
Jan 26 13:40:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 402 KiB/s rd, 1.2 MiB/s wr, 57 op/s
Jan 26 13:40:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:39.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:39.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:40:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3811448079' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:40:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:40:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3811448079' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:40:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 239 KiB/s rd, 143 KiB/s wr, 27 op/s
Jan 26 13:40:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:40:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:41.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:41 np0005596060 systemd[1]: Created slice User Slice of UID 42436.
Jan 26 13:40:41 np0005596060 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 26 13:40:41 np0005596060 systemd-logind[786]: New session 52 of user nova.
Jan 26 13:40:41 np0005596060 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 26 13:40:41 np0005596060 systemd[1]: Starting User Manager for UID 42436...
Jan 26 13:40:41 np0005596060 systemd[298869]: Queued start job for default target Main User Target.
Jan 26 13:40:41 np0005596060 systemd[298869]: Created slice User Application Slice.
Jan 26 13:40:41 np0005596060 systemd[298869]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 26 13:40:41 np0005596060 systemd[298869]: Started Daily Cleanup of User's Temporary Directories.
Jan 26 13:40:41 np0005596060 systemd[298869]: Reached target Paths.
Jan 26 13:40:41 np0005596060 systemd[298869]: Reached target Timers.
Jan 26 13:40:41 np0005596060 systemd[298869]: Starting D-Bus User Message Bus Socket...
Jan 26 13:40:41 np0005596060 systemd[298869]: Starting Create User's Volatile Files and Directories...
Jan 26 13:40:41 np0005596060 systemd[298869]: Listening on D-Bus User Message Bus Socket.
Jan 26 13:40:41 np0005596060 systemd[298869]: Reached target Sockets.
Jan 26 13:40:41 np0005596060 systemd[298869]: Finished Create User's Volatile Files and Directories.
Jan 26 13:40:41 np0005596060 systemd[298869]: Reached target Basic System.
Jan 26 13:40:41 np0005596060 systemd[298869]: Reached target Main User Target.
Jan 26 13:40:41 np0005596060 systemd[298869]: Startup finished in 140ms.
Jan 26 13:40:41 np0005596060 systemd[1]: Started User Manager for UID 42436.
Jan 26 13:40:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:41.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:42 np0005596060 systemd[1]: Started Session 52 of User nova.
Jan 26 13:40:42 np0005596060 systemd[1]: session-52.scope: Deactivated successfully.
Jan 26 13:40:42 np0005596060 systemd-logind[786]: Session 52 logged out. Waiting for processes to exit.
Jan 26 13:40:42 np0005596060 systemd-logind[786]: Removed session 52.
Jan 26 13:40:42 np0005596060 systemd-logind[786]: New session 54 of user nova.
Jan 26 13:40:42 np0005596060 systemd[1]: Started Session 54 of User nova.
Jan 26 13:40:42 np0005596060 systemd[1]: session-54.scope: Deactivated successfully.
Jan 26 13:40:42 np0005596060 systemd-logind[786]: Session 54 logged out. Waiting for processes to exit.
Jan 26 13:40:42 np0005596060 systemd-logind[786]: Removed session 54.
Jan 26 13:40:42 np0005596060 nova_compute[247421]: 2026-01-26 18:40:42.641 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:40:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 239 KiB/s rd, 145 KiB/s wr, 27 op/s
Jan 26 13:40:42 np0005596060 nova_compute[247421]: 2026-01-26 18:40:42.852 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:40:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:43.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:43.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:40:44
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'images', 'volumes']
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 7.7 KiB/s rd, 19 KiB/s wr, 2 op/s
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:40:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:40:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:45.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:46.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:46 np0005596060 nova_compute[247421]: 2026-01-26 18:40:46.091 247428 DEBUG nova.compute.manager [req-35b8fb08-ad66-4629-8984-93930e480573 req-aaec5f7c-1c57-40e8-8447-ad06421f0c46 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received event network-vif-unplugged-2e588806-3c53-401a-90f3-537e4176dcfe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 13:40:46 np0005596060 nova_compute[247421]: 2026-01-26 18:40:46.092 247428 DEBUG oslo_concurrency.lockutils [req-35b8fb08-ad66-4629-8984-93930e480573 req-aaec5f7c-1c57-40e8-8447-ad06421f0c46 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:40:46 np0005596060 nova_compute[247421]: 2026-01-26 18:40:46.092 247428 DEBUG oslo_concurrency.lockutils [req-35b8fb08-ad66-4629-8984-93930e480573 req-aaec5f7c-1c57-40e8-8447-ad06421f0c46 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:40:46 np0005596060 nova_compute[247421]: 2026-01-26 18:40:46.092 247428 DEBUG oslo_concurrency.lockutils [req-35b8fb08-ad66-4629-8984-93930e480573 req-aaec5f7c-1c57-40e8-8447-ad06421f0c46 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:40:46 np0005596060 nova_compute[247421]: 2026-01-26 18:40:46.092 247428 DEBUG nova.compute.manager [req-35b8fb08-ad66-4629-8984-93930e480573 req-aaec5f7c-1c57-40e8-8447-ad06421f0c46 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] No waiting events found dispatching network-vif-unplugged-2e588806-3c53-401a-90f3-537e4176dcfe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 13:40:46 np0005596060 nova_compute[247421]: 2026-01-26 18:40:46.092 247428 WARNING nova.compute.manager [req-35b8fb08-ad66-4629-8984-93930e480573 req-aaec5f7c-1c57-40e8-8447-ad06421f0c46 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received unexpected event network-vif-unplugged-2e588806-3c53-401a-90f3-537e4176dcfe for instance with vm_state active and task_state resize_migrating.
Jan 26 13:40:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:40:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 3.5 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 26 13:40:46 np0005596060 nova_compute[247421]: 2026-01-26 18:40:46.874 247428 INFO nova.network.neutron [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Updating port 2e588806-3c53-401a-90f3-537e4176dcfe with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 26 13:40:47 np0005596060 nova_compute[247421]: 2026-01-26 18:40:47.419 247428 DEBUG oslo_concurrency.lockutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "refresh_cache-b81e40ad-cba8-4851-8245-5c3eb983b479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 13:40:47 np0005596060 nova_compute[247421]: 2026-01-26 18:40:47.419 247428 DEBUG oslo_concurrency.lockutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquired lock "refresh_cache-b81e40ad-cba8-4851-8245-5c3eb983b479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 13:40:47 np0005596060 nova_compute[247421]: 2026-01-26 18:40:47.419 247428 DEBUG nova.network.neutron [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 13:40:47 np0005596060 nova_compute[247421]: 2026-01-26 18:40:47.511 247428 DEBUG nova.compute.manager [req-35c47ea9-5abc-469d-ae09-9e5c3a6dbfd9 req-b8309978-0415-42e9-927b-0a790469acaf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received event network-changed-2e588806-3c53-401a-90f3-537e4176dcfe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 13:40:47 np0005596060 nova_compute[247421]: 2026-01-26 18:40:47.511 247428 DEBUG nova.compute.manager [req-35c47ea9-5abc-469d-ae09-9e5c3a6dbfd9 req-b8309978-0415-42e9-927b-0a790469acaf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Refreshing instance network info cache due to event network-changed-2e588806-3c53-401a-90f3-537e4176dcfe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 13:40:47 np0005596060 nova_compute[247421]: 2026-01-26 18:40:47.512 247428 DEBUG oslo_concurrency.lockutils [req-35c47ea9-5abc-469d-ae09-9e5c3a6dbfd9 req-b8309978-0415-42e9-927b-0a790469acaf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-b81e40ad-cba8-4851-8245-5c3eb983b479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 13:40:47 np0005596060 nova_compute[247421]: 2026-01-26 18:40:47.643 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:40:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:47.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:47 np0005596060 nova_compute[247421]: 2026-01-26 18:40:47.853 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:40:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:48.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:48 np0005596060 nova_compute[247421]: 2026-01-26 18:40:48.168 247428 DEBUG nova.compute.manager [req-5f1b7a57-5ac9-42ef-8ebd-813a4f243c22 req-9e2e0eb0-e415-45bf-9fb6-caa35b48e7f0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received event network-vif-plugged-2e588806-3c53-401a-90f3-537e4176dcfe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:40:48 np0005596060 nova_compute[247421]: 2026-01-26 18:40:48.168 247428 DEBUG oslo_concurrency.lockutils [req-5f1b7a57-5ac9-42ef-8ebd-813a4f243c22 req-9e2e0eb0-e415-45bf-9fb6-caa35b48e7f0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:40:48 np0005596060 nova_compute[247421]: 2026-01-26 18:40:48.168 247428 DEBUG oslo_concurrency.lockutils [req-5f1b7a57-5ac9-42ef-8ebd-813a4f243c22 req-9e2e0eb0-e415-45bf-9fb6-caa35b48e7f0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:40:48 np0005596060 nova_compute[247421]: 2026-01-26 18:40:48.169 247428 DEBUG oslo_concurrency.lockutils [req-5f1b7a57-5ac9-42ef-8ebd-813a4f243c22 req-9e2e0eb0-e415-45bf-9fb6-caa35b48e7f0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:40:48 np0005596060 nova_compute[247421]: 2026-01-26 18:40:48.169 247428 DEBUG nova.compute.manager [req-5f1b7a57-5ac9-42ef-8ebd-813a4f243c22 req-9e2e0eb0-e415-45bf-9fb6-caa35b48e7f0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] No waiting events found dispatching network-vif-plugged-2e588806-3c53-401a-90f3-537e4176dcfe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:40:48 np0005596060 nova_compute[247421]: 2026-01-26 18:40:48.169 247428 WARNING nova.compute.manager [req-5f1b7a57-5ac9-42ef-8ebd-813a4f243c22 req-9e2e0eb0-e415-45bf-9fb6-caa35b48e7f0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received unexpected event network-vif-plugged-2e588806-3c53-401a-90f3-537e4176dcfe for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:40:48 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev a48be348-3f9f-45ec-bed5-4e72cc0525ac does not exist
Jan 26 13:40:48 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 6a1b6d0e-a05a-4fd5-aca9-a93d11809252 does not exist
Jan 26 13:40:48 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 01151f78-135a-465d-b047-7bac0f8f22e6 does not exist
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:40:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:40:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 2.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Jan 26 13:40:48 np0005596060 podman[299337]: 2026-01-26 18:40:48.964025721 +0000 UTC m=+0.036655904 container create e6c8dd9b9e7e878ed1a73d92e31134252300c9872f5c442d293afbe9ced8d3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:40:49 np0005596060 systemd[1]: Started libpod-conmon-e6c8dd9b9e7e878ed1a73d92e31134252300c9872f5c442d293afbe9ced8d3d5.scope.
Jan 26 13:40:49 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:40:49 np0005596060 podman[299337]: 2026-01-26 18:40:48.947164837 +0000 UTC m=+0.019795040 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:40:49 np0005596060 podman[299337]: 2026-01-26 18:40:49.045100781 +0000 UTC m=+0.117730974 container init e6c8dd9b9e7e878ed1a73d92e31134252300c9872f5c442d293afbe9ced8d3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hamilton, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 13:40:49 np0005596060 podman[299337]: 2026-01-26 18:40:49.053662067 +0000 UTC m=+0.126292240 container start e6c8dd9b9e7e878ed1a73d92e31134252300c9872f5c442d293afbe9ced8d3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 13:40:49 np0005596060 podman[299337]: 2026-01-26 18:40:49.057870743 +0000 UTC m=+0.130500946 container attach e6c8dd9b9e7e878ed1a73d92e31134252300c9872f5c442d293afbe9ced8d3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 13:40:49 np0005596060 awesome_hamilton[299353]: 167 167
Jan 26 13:40:49 np0005596060 systemd[1]: libpod-e6c8dd9b9e7e878ed1a73d92e31134252300c9872f5c442d293afbe9ced8d3d5.scope: Deactivated successfully.
Jan 26 13:40:49 np0005596060 conmon[299353]: conmon e6c8dd9b9e7e878ed1a7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e6c8dd9b9e7e878ed1a73d92e31134252300c9872f5c442d293afbe9ced8d3d5.scope/container/memory.events
Jan 26 13:40:49 np0005596060 podman[299337]: 2026-01-26 18:40:49.05936088 +0000 UTC m=+0.131991073 container died e6c8dd9b9e7e878ed1a73d92e31134252300c9872f5c442d293afbe9ced8d3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 26 13:40:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay-364416e566e6956866bb8c547cc206e72c71c430f1eea1954e8438d2f43bd66c-merged.mount: Deactivated successfully.
Jan 26 13:40:49 np0005596060 podman[299337]: 2026-01-26 18:40:49.094649118 +0000 UTC m=+0.167279301 container remove e6c8dd9b9e7e878ed1a73d92e31134252300c9872f5c442d293afbe9ced8d3d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 26 13:40:49 np0005596060 systemd[1]: libpod-conmon-e6c8dd9b9e7e878ed1a73d92e31134252300c9872f5c442d293afbe9ced8d3d5.scope: Deactivated successfully.
Jan 26 13:40:49 np0005596060 podman[299376]: 2026-01-26 18:40:49.267060358 +0000 UTC m=+0.048795179 container create 46816264a23dad8ba77e5337d7cb43d983f095f36968d01e1bc1b08af450a244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:40:49 np0005596060 systemd[1]: Started libpod-conmon-46816264a23dad8ba77e5337d7cb43d983f095f36968d01e1bc1b08af450a244.scope.
Jan 26 13:40:49 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:40:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2665455031ff281103a9feda15218b10457cd7bcb3fb54a839b97de0c00da909/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2665455031ff281103a9feda15218b10457cd7bcb3fb54a839b97de0c00da909/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2665455031ff281103a9feda15218b10457cd7bcb3fb54a839b97de0c00da909/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2665455031ff281103a9feda15218b10457cd7bcb3fb54a839b97de0c00da909/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2665455031ff281103a9feda15218b10457cd7bcb3fb54a839b97de0c00da909/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:49 np0005596060 podman[299376]: 2026-01-26 18:40:49.251327852 +0000 UTC m=+0.033062693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:40:49 np0005596060 podman[299376]: 2026-01-26 18:40:49.352246812 +0000 UTC m=+0.133981673 container init 46816264a23dad8ba77e5337d7cb43d983f095f36968d01e1bc1b08af450a244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:40:49 np0005596060 podman[299376]: 2026-01-26 18:40:49.361470474 +0000 UTC m=+0.143205305 container start 46816264a23dad8ba77e5337d7cb43d983f095f36968d01e1bc1b08af450a244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lumiere, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 13:40:49 np0005596060 podman[299376]: 2026-01-26 18:40:49.364798127 +0000 UTC m=+0.146532968 container attach 46816264a23dad8ba77e5337d7cb43d983f095f36968d01e1bc1b08af450a244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Jan 26 13:40:49 np0005596060 nova_compute[247421]: 2026-01-26 18:40:49.399 247428 DEBUG nova.network.neutron [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Updating instance_info_cache with network_info: [{"id": "2e588806-3c53-401a-90f3-537e4176dcfe", "address": "fa:16:3e:24:50:d1", "network": {"id": "82e3f39f-8d87-4e62-a668-ee902f53c144", "bridge": "br-int", "label": "tempest-network-smoke--1049565076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e588806-3c", "ovs_interfaceid": "2e588806-3c53-401a-90f3-537e4176dcfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:40:49 np0005596060 nova_compute[247421]: 2026-01-26 18:40:49.419 247428 DEBUG oslo_concurrency.lockutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Releasing lock "refresh_cache-b81e40ad-cba8-4851-8245-5c3eb983b479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:40:49 np0005596060 nova_compute[247421]: 2026-01-26 18:40:49.423 247428 DEBUG oslo_concurrency.lockutils [req-35c47ea9-5abc-469d-ae09-9e5c3a6dbfd9 req-b8309978-0415-42e9-927b-0a790469acaf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-b81e40ad-cba8-4851-8245-5c3eb983b479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:40:49 np0005596060 nova_compute[247421]: 2026-01-26 18:40:49.423 247428 DEBUG nova.network.neutron [req-35c47ea9-5abc-469d-ae09-9e5c3a6dbfd9 req-b8309978-0415-42e9-927b-0a790469acaf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Refreshing network info cache for port 2e588806-3c53-401a-90f3-537e4176dcfe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:40:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:40:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:40:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:40:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:40:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:40:49 np0005596060 nova_compute[247421]: 2026-01-26 18:40:49.515 247428 DEBUG nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 26 13:40:49 np0005596060 nova_compute[247421]: 2026-01-26 18:40:49.517 247428 DEBUG nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 26 13:40:49 np0005596060 nova_compute[247421]: 2026-01-26 18:40:49.517 247428 INFO nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Creating image(s)#033[00m
Jan 26 13:40:49 np0005596060 nova_compute[247421]: 2026-01-26 18:40:49.556 247428 DEBUG nova.storage.rbd_utils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] creating snapshot(nova-resize) on rbd image(b81e40ad-cba8-4851-8245-5c3eb983b479_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 26 13:40:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:49.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:50.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:50 np0005596060 flamboyant_lumiere[299392]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:40:50 np0005596060 flamboyant_lumiere[299392]: --> relative data size: 1.0
Jan 26 13:40:50 np0005596060 flamboyant_lumiere[299392]: --> All data devices are unavailable
Jan 26 13:40:50 np0005596060 systemd[1]: libpod-46816264a23dad8ba77e5337d7cb43d983f095f36968d01e1bc1b08af450a244.scope: Deactivated successfully.
Jan 26 13:40:50 np0005596060 podman[299376]: 2026-01-26 18:40:50.196761186 +0000 UTC m=+0.978496017 container died 46816264a23dad8ba77e5337d7cb43d983f095f36968d01e1bc1b08af450a244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:40:50 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2665455031ff281103a9feda15218b10457cd7bcb3fb54a839b97de0c00da909-merged.mount: Deactivated successfully.
Jan 26 13:40:50 np0005596060 podman[299376]: 2026-01-26 18:40:50.255356991 +0000 UTC m=+1.037091812 container remove 46816264a23dad8ba77e5337d7cb43d983f095f36968d01e1bc1b08af450a244 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:40:50 np0005596060 systemd[1]: libpod-conmon-46816264a23dad8ba77e5337d7cb43d983f095f36968d01e1bc1b08af450a244.scope: Deactivated successfully.
Jan 26 13:40:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Jan 26 13:40:50 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Jan 26 13:40:50 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Jan 26 13:40:50 np0005596060 nova_compute[247421]: 2026-01-26 18:40:50.535 247428 DEBUG nova.objects.instance [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'trusted_certs' on Instance uuid b81e40ad-cba8-4851-8245-5c3eb983b479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:40:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 121 MiB data, 410 MiB used, 21 GiB / 21 GiB avail; 3.4 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 26 13:40:50 np0005596060 podman[299636]: 2026-01-26 18:40:50.878911023 +0000 UTC m=+0.044006568 container create 0cb30f32cadac84862718a3ec0c72768132e6130a90d5c8741fc286f8d1cf70d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:40:50 np0005596060 systemd[1]: Started libpod-conmon-0cb30f32cadac84862718a3ec0c72768132e6130a90d5c8741fc286f8d1cf70d.scope.
Jan 26 13:40:50 np0005596060 podman[299636]: 2026-01-26 18:40:50.85931992 +0000 UTC m=+0.024415505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:40:50 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:40:51 np0005596060 podman[299636]: 2026-01-26 18:40:51.012009213 +0000 UTC m=+0.177104778 container init 0cb30f32cadac84862718a3ec0c72768132e6130a90d5c8741fc286f8d1cf70d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:40:51 np0005596060 podman[299636]: 2026-01-26 18:40:51.021061571 +0000 UTC m=+0.186157116 container start 0cb30f32cadac84862718a3ec0c72768132e6130a90d5c8741fc286f8d1cf70d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:40:51 np0005596060 funny_chebyshev[299652]: 167 167
Jan 26 13:40:51 np0005596060 systemd[1]: libpod-0cb30f32cadac84862718a3ec0c72768132e6130a90d5c8741fc286f8d1cf70d.scope: Deactivated successfully.
Jan 26 13:40:51 np0005596060 conmon[299652]: conmon 0cb30f32cadac8486271 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0cb30f32cadac84862718a3ec0c72768132e6130a90d5c8741fc286f8d1cf70d.scope/container/memory.events
Jan 26 13:40:51 np0005596060 podman[299636]: 2026-01-26 18:40:51.037274609 +0000 UTC m=+0.202370154 container attach 0cb30f32cadac84862718a3ec0c72768132e6130a90d5c8741fc286f8d1cf70d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 13:40:51 np0005596060 podman[299636]: 2026-01-26 18:40:51.038145311 +0000 UTC m=+0.203240846 container died 0cb30f32cadac84862718a3ec0c72768132e6130a90d5c8741fc286f8d1cf70d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.116 247428 DEBUG nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.117 247428 DEBUG nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Ensure instance console log exists: /var/lib/nova/instances/b81e40ad-cba8-4851-8245-5c3eb983b479/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.118 247428 DEBUG oslo_concurrency.lockutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.118 247428 DEBUG oslo_concurrency.lockutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.118 247428 DEBUG oslo_concurrency.lockutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.121 247428 DEBUG nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Start _get_guest_xml network_info=[{"id": "2e588806-3c53-401a-90f3-537e4176dcfe", "address": "fa:16:3e:24:50:d1", "network": {"id": "82e3f39f-8d87-4e62-a668-ee902f53c144", "bridge": "br-int", "label": "tempest-network-smoke--1049565076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1049565076", "vif_mac": "fa:16:3e:24:50:d1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e588806-3c", "ovs_interfaceid": "2e588806-3c53-401a-90f3-537e4176dcfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.127 247428 WARNING nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.131 247428 DEBUG nova.virt.libvirt.host [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.132 247428 DEBUG nova.virt.libvirt.host [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.135 247428 DEBUG nova.virt.libvirt.host [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.135 247428 DEBUG nova.virt.libvirt.host [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.136 247428 DEBUG nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.137 247428 DEBUG nova.virt.hardware [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='d6eed492-4ac8-4913-b8dd-e9e1922604e9',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.137 247428 DEBUG nova.virt.hardware [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.137 247428 DEBUG nova.virt.hardware [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.137 247428 DEBUG nova.virt.hardware [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.138 247428 DEBUG nova.virt.hardware [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.138 247428 DEBUG nova.virt.hardware [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.138 247428 DEBUG nova.virt.hardware [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.138 247428 DEBUG nova.virt.hardware [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.138 247428 DEBUG nova.virt.hardware [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.139 247428 DEBUG nova.virt.hardware [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.139 247428 DEBUG nova.virt.hardware [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.139 247428 DEBUG nova.objects.instance [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'vcpu_model' on Instance uuid b81e40ad-cba8-4851-8245-5c3eb983b479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:40:51 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a1c08e704cf2df7833e1b9e5e2e0ab8f40d85c4f1ffc4304f59c4f47495c2f4d-merged.mount: Deactivated successfully.
Jan 26 13:40:51 np0005596060 podman[299636]: 2026-01-26 18:40:51.228245855 +0000 UTC m=+0.393341400 container remove 0cb30f32cadac84862718a3ec0c72768132e6130a90d5c8741fc286f8d1cf70d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_chebyshev, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:40:51 np0005596060 systemd[1]: libpod-conmon-0cb30f32cadac84862718a3ec0c72768132e6130a90d5c8741fc286f8d1cf70d.scope: Deactivated successfully.
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.254 247428 DEBUG oslo_concurrency.processutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:40:51 np0005596060 podman[299680]: 2026-01-26 18:40:51.396555201 +0000 UTC m=+0.045766883 container create 585c41509b21cd10ccbffccebe64b23e772d3d0f8a14a5e87639b448c0736cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lumiere, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.399 247428 DEBUG nova.network.neutron [req-35c47ea9-5abc-469d-ae09-9e5c3a6dbfd9 req-b8309978-0415-42e9-927b-0a790469acaf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Updated VIF entry in instance network info cache for port 2e588806-3c53-401a-90f3-537e4176dcfe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.400 247428 DEBUG nova.network.neutron [req-35c47ea9-5abc-469d-ae09-9e5c3a6dbfd9 req-b8309978-0415-42e9-927b-0a790469acaf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Updating instance_info_cache with network_info: [{"id": "2e588806-3c53-401a-90f3-537e4176dcfe", "address": "fa:16:3e:24:50:d1", "network": {"id": "82e3f39f-8d87-4e62-a668-ee902f53c144", "bridge": "br-int", "label": "tempest-network-smoke--1049565076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e588806-3c", "ovs_interfaceid": "2e588806-3c53-401a-90f3-537e4176dcfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:40:51 np0005596060 systemd[1]: Started libpod-conmon-585c41509b21cd10ccbffccebe64b23e772d3d0f8a14a5e87639b448c0736cb0.scope.
Jan 26 13:40:51 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:40:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea3a42248fec36c39e1e92c1243e82d638f77ecea2ae534fed62b4ee59b9803/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea3a42248fec36c39e1e92c1243e82d638f77ecea2ae534fed62b4ee59b9803/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea3a42248fec36c39e1e92c1243e82d638f77ecea2ae534fed62b4ee59b9803/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ea3a42248fec36c39e1e92c1243e82d638f77ecea2ae534fed62b4ee59b9803/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:51 np0005596060 podman[299680]: 2026-01-26 18:40:51.375778078 +0000 UTC m=+0.024989780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:40:51 np0005596060 podman[299680]: 2026-01-26 18:40:51.53241522 +0000 UTC m=+0.181626912 container init 585c41509b21cd10ccbffccebe64b23e772d3d0f8a14a5e87639b448c0736cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.532 247428 DEBUG oslo_concurrency.lockutils [req-35c47ea9-5abc-469d-ae09-9e5c3a6dbfd9 req-b8309978-0415-42e9-927b-0a790469acaf 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-b81e40ad-cba8-4851-8245-5c3eb983b479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:40:51 np0005596060 podman[299680]: 2026-01-26 18:40:51.540033172 +0000 UTC m=+0.189244844 container start 585c41509b21cd10ccbffccebe64b23e772d3d0f8a14a5e87639b448c0736cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lumiere, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:40:51 np0005596060 podman[299680]: 2026-01-26 18:40:51.549013148 +0000 UTC m=+0.198224850 container attach 585c41509b21cd10ccbffccebe64b23e772d3d0f8a14a5e87639b448c0736cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lumiere, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:40:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:40:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:40:51 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2182617815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.701 247428 DEBUG oslo_concurrency.processutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:40:51 np0005596060 nova_compute[247421]: 2026-01-26 18:40:51.749 247428 DEBUG oslo_concurrency.processutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:40:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:51.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:52.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:40:52 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/934707278' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.179 247428 DEBUG oslo_concurrency.processutils [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.181 247428 DEBUG nova.virt.libvirt.vif [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:40:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-766514569',display_name='tempest-TestNetworkAdvancedServerOps-server-766514569',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-766514569',id=27,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL7uQVm9s7C+OqbAh1CIPBxJi+6AkyPpWOPYYV7DcXbtYqg7663H86MBmiolT3Uacef2LD9/V7P8RfgEuQwZCVENs2yHMAD4P9rcdlzFL0K8Hhq6UoTOylf5rcW9T4i1Qg==',key_name='tempest-TestNetworkAdvancedServerOps-706838647',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:40:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-rq7teih3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:40:46Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=b81e40ad-cba8-4851-8245-5c3eb983b479,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e588806-3c53-401a-90f3-537e4176dcfe", "address": "fa:16:3e:24:50:d1", "network": {"id": "82e3f39f-8d87-4e62-a668-ee902f53c144", "bridge": "br-int", "label": "tempest-network-smoke--1049565076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1049565076", "vif_mac": "fa:16:3e:24:50:d1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e588806-3c", "ovs_interfaceid": "2e588806-3c53-401a-90f3-537e4176dcfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.182 247428 DEBUG nova.network.os_vif_util [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "2e588806-3c53-401a-90f3-537e4176dcfe", "address": "fa:16:3e:24:50:d1", "network": {"id": "82e3f39f-8d87-4e62-a668-ee902f53c144", "bridge": "br-int", "label": "tempest-network-smoke--1049565076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1049565076", "vif_mac": "fa:16:3e:24:50:d1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e588806-3c", "ovs_interfaceid": "2e588806-3c53-401a-90f3-537e4176dcfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.183 247428 DEBUG nova.network.os_vif_util [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:50:d1,bridge_name='br-int',has_traffic_filtering=True,id=2e588806-3c53-401a-90f3-537e4176dcfe,network=Network(82e3f39f-8d87-4e62-a668-ee902f53c144),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e588806-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.186 247428 DEBUG nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  <uuid>b81e40ad-cba8-4851-8245-5c3eb983b479</uuid>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  <name>instance-0000001b</name>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  <memory>196608</memory>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-766514569</nova:name>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:40:51</nova:creationTime>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.micro">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <nova:memory>192</nova:memory>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <nova:user uuid="ffa1cd7ba9e543f78f2ef48c2a7a67a2">tempest-TestNetworkAdvancedServerOps-1357272614-project-member</nova:user>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <nova:project uuid="301bad5c2066428fa7f214024672bf92">tempest-TestNetworkAdvancedServerOps-1357272614</nova:project>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <nova:port uuid="2e588806-3c53-401a-90f3-537e4176dcfe">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <entry name="serial">b81e40ad-cba8-4851-8245-5c3eb983b479</entry>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <entry name="uuid">b81e40ad-cba8-4851-8245-5c3eb983b479</entry>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/b81e40ad-cba8-4851-8245-5c3eb983b479_disk">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/b81e40ad-cba8-4851-8245-5c3eb983b479_disk.config">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:24:50:d1"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <target dev="tap2e588806-3c"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/b81e40ad-cba8-4851-8245-5c3eb983b479/console.log" append="off"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:40:52 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:40:52 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:40:52 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:40:52 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.187 247428 DEBUG nova.virt.libvirt.vif [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:40:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-766514569',display_name='tempest-TestNetworkAdvancedServerOps-server-766514569',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-766514569',id=27,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL7uQVm9s7C+OqbAh1CIPBxJi+6AkyPpWOPYYV7DcXbtYqg7663H86MBmiolT3Uacef2LD9/V7P8RfgEuQwZCVENs2yHMAD4P9rcdlzFL0K8Hhq6UoTOylf5rcW9T4i1Qg==',key_name='tempest-TestNetworkAdvancedServerOps-706838647',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:40:14Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-rq7teih3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:40:46Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=b81e40ad-cba8-4851-8245-5c3eb983b479,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e588806-3c53-401a-90f3-537e4176dcfe", "address": "fa:16:3e:24:50:d1", "network": {"id": "82e3f39f-8d87-4e62-a668-ee902f53c144", "bridge": "br-int", "label": "tempest-network-smoke--1049565076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1049565076", "vif_mac": "fa:16:3e:24:50:d1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e588806-3c", "ovs_interfaceid": "2e588806-3c53-401a-90f3-537e4176dcfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.187 247428 DEBUG nova.network.os_vif_util [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "2e588806-3c53-401a-90f3-537e4176dcfe", "address": "fa:16:3e:24:50:d1", "network": {"id": "82e3f39f-8d87-4e62-a668-ee902f53c144", "bridge": "br-int", "label": "tempest-network-smoke--1049565076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1049565076", "vif_mac": "fa:16:3e:24:50:d1"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e588806-3c", "ovs_interfaceid": "2e588806-3c53-401a-90f3-537e4176dcfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.188 247428 DEBUG nova.network.os_vif_util [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:50:d1,bridge_name='br-int',has_traffic_filtering=True,id=2e588806-3c53-401a-90f3-537e4176dcfe,network=Network(82e3f39f-8d87-4e62-a668-ee902f53c144),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e588806-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.188 247428 DEBUG os_vif [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:50:d1,bridge_name='br-int',has_traffic_filtering=True,id=2e588806-3c53-401a-90f3-537e4176dcfe,network=Network(82e3f39f-8d87-4e62-a668-ee902f53c144),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e588806-3c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.189 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.190 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.190 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.196 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.196 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2e588806-3c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.197 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2e588806-3c, col_values=(('external_ids', {'iface-id': '2e588806-3c53-401a-90f3-537e4176dcfe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:50:d1', 'vm-uuid': 'b81e40ad-cba8-4851-8245-5c3eb983b479'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.198 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:52 np0005596060 NetworkManager[48900]: <info>  [1769452852.1997] manager: (tap2e588806-3c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.201 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.206 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.206 247428 INFO os_vif [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:50:d1,bridge_name='br-int',has_traffic_filtering=True,id=2e588806-3c53-401a-90f3-537e4176dcfe,network=Network(82e3f39f-8d87-4e62-a668-ee902f53c144),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e588806-3c')#033[00m
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]: {
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:    "1": [
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:        {
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            "devices": [
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "/dev/loop3"
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            ],
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            "lv_name": "ceph_lv0",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            "lv_size": "7511998464",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            "name": "ceph_lv0",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            "tags": {
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "ceph.cluster_name": "ceph",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "ceph.crush_device_class": "",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "ceph.encrypted": "0",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "ceph.osd_id": "1",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "ceph.type": "block",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:                "ceph.vdo": "0"
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            },
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            "type": "block",
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:            "vg_name": "ceph_vg0"
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:        }
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]:    ]
Jan 26 13:40:52 np0005596060 dazzling_lumiere[299715]: }
Jan 26 13:40:52 np0005596060 systemd[1]: libpod-585c41509b21cd10ccbffccebe64b23e772d3d0f8a14a5e87639b448c0736cb0.scope: Deactivated successfully.
Jan 26 13:40:52 np0005596060 podman[299680]: 2026-01-26 18:40:52.346655603 +0000 UTC m=+0.995867285 container died 585c41509b21cd10ccbffccebe64b23e772d3d0f8a14a5e87639b448c0736cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lumiere, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:40:52 np0005596060 systemd[1]: Stopping User Manager for UID 42436...
Jan 26 13:40:52 np0005596060 systemd[298869]: Activating special unit Exit the Session...
Jan 26 13:40:52 np0005596060 systemd[298869]: Stopped target Main User Target.
Jan 26 13:40:52 np0005596060 systemd[298869]: Stopped target Basic System.
Jan 26 13:40:52 np0005596060 systemd[298869]: Stopped target Paths.
Jan 26 13:40:52 np0005596060 systemd[298869]: Stopped target Sockets.
Jan 26 13:40:52 np0005596060 systemd[298869]: Stopped target Timers.
Jan 26 13:40:52 np0005596060 systemd[298869]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 26 13:40:52 np0005596060 systemd[298869]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.351 247428 DEBUG nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.352 247428 DEBUG nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.352 247428 DEBUG nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] No VIF found with MAC fa:16:3e:24:50:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.353 247428 INFO nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Using config drive#033[00m
Jan 26 13:40:52 np0005596060 systemd[298869]: Closed D-Bus User Message Bus Socket.
Jan 26 13:40:52 np0005596060 systemd[298869]: Stopped Create User's Volatile Files and Directories.
Jan 26 13:40:52 np0005596060 systemd[298869]: Removed slice User Application Slice.
Jan 26 13:40:52 np0005596060 systemd[298869]: Reached target Shutdown.
Jan 26 13:40:52 np0005596060 systemd[298869]: Finished Exit the Session.
Jan 26 13:40:52 np0005596060 systemd[298869]: Reached target Exit the Session.
Jan 26 13:40:52 np0005596060 systemd[1]: user@42436.service: Deactivated successfully.
Jan 26 13:40:52 np0005596060 systemd[1]: Stopped User Manager for UID 42436.
Jan 26 13:40:52 np0005596060 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 26 13:40:52 np0005596060 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 26 13:40:52 np0005596060 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 26 13:40:52 np0005596060 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 26 13:40:52 np0005596060 systemd[1]: Removed slice User Slice of UID 42436.
Jan 26 13:40:52 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4ea3a42248fec36c39e1e92c1243e82d638f77ecea2ae534fed62b4ee59b9803-merged.mount: Deactivated successfully.
Jan 26 13:40:52 np0005596060 kernel: tap2e588806-3c: entered promiscuous mode
Jan 26 13:40:52 np0005596060 NetworkManager[48900]: <info>  [1769452852.4413] manager: (tap2e588806-3c): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Jan 26 13:40:52 np0005596060 podman[299680]: 2026-01-26 18:40:52.463122034 +0000 UTC m=+1.112333716 container remove 585c41509b21cd10ccbffccebe64b23e772d3d0f8a14a5e87639b448c0736cb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lumiere, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 13:40:52 np0005596060 systemd-udevd[299811]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:40:52 np0005596060 systemd[1]: libpod-conmon-585c41509b21cd10ccbffccebe64b23e772d3d0f8a14a5e87639b448c0736cb0.scope: Deactivated successfully.
Jan 26 13:40:52 np0005596060 ovn_controller[148842]: 2026-01-26T18:40:52Z|00167|binding|INFO|Claiming lport 2e588806-3c53-401a-90f3-537e4176dcfe for this chassis.
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.477 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:52 np0005596060 ovn_controller[148842]: 2026-01-26T18:40:52Z|00168|binding|INFO|2e588806-3c53-401a-90f3-537e4176dcfe: Claiming fa:16:3e:24:50:d1 10.100.0.7
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.481 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.486 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:52 np0005596060 NetworkManager[48900]: <info>  [1769452852.4896] manager: (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Jan 26 13:40:52 np0005596060 NetworkManager[48900]: <info>  [1769452852.4903] manager: (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.492 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:50:d1 10.100.0.7'], port_security=['fa:16:3e:24:50:d1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b81e40ad-cba8-4851-8245-5c3eb983b479', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82e3f39f-8d87-4e62-a668-ee902f53c144', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '301bad5c2066428fa7f214024672bf92', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ff649c44-332a-4be4-82da-382a0117f640', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.213'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a7598a0-01e1-4002-824f-2c7bac3a3915, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=2e588806-3c53-401a-90f3-537e4176dcfe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.494 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 2e588806-3c53-401a-90f3-537e4176dcfe in datapath 82e3f39f-8d87-4e62-a668-ee902f53c144 bound to our chassis#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.495 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82e3f39f-8d87-4e62-a668-ee902f53c144#033[00m
Jan 26 13:40:52 np0005596060 NetworkManager[48900]: <info>  [1769452852.5053] device (tap2e588806-3c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:40:52 np0005596060 NetworkManager[48900]: <info>  [1769452852.5063] device (tap2e588806-3c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.511 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd32c96-70a3-4f53-af82-b32504d8ecba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.512 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82e3f39f-81 in ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:40:52 np0005596060 systemd-machined[213879]: New machine qemu-14-instance-0000001b.
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.516 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82e3f39f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.516 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[d2174aca-b02f-4936-af78-96fac5859a1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.517 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bbfb43-3608-478b-81fc-5588ed25a5a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 systemd[1]: Started Virtual Machine qemu-14-instance-0000001b.
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.532 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[154f4fcf-c54a-48b7-91b2-1a9eedd2fb5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.563 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d5ebc5-f3ef-4da3-8678-88281c06245d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.573 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.587 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:52 np0005596060 ovn_controller[148842]: 2026-01-26T18:40:52Z|00169|binding|INFO|Setting lport 2e588806-3c53-401a-90f3-537e4176dcfe ovn-installed in OVS
Jan 26 13:40:52 np0005596060 ovn_controller[148842]: 2026-01-26T18:40:52Z|00170|binding|INFO|Setting lport 2e588806-3c53-401a-90f3-537e4176dcfe up in Southbound
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.600 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.602 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[f93304c2-cbd5-49a6-8d93-a0800bf4b497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 NetworkManager[48900]: <info>  [1769452852.6113] manager: (tap82e3f39f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Jan 26 13:40:52 np0005596060 systemd-udevd[299817]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.612 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e19f1a9d-97d3-4513-9658-83dc86b69c8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.646 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[b6cbc8f8-26e1-431d-9984-29d918a05b6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.649 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[495dcacc-05ce-4bec-8d4b-803da9e61cd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 NetworkManager[48900]: <info>  [1769452852.6715] device (tap82e3f39f-80): carrier: link connected
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.678 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[c4739f25-eb1f-42a9-967c-991f1784065a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.696 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[87460e31-fad4-49a8-a8f9-9e1aa48cd137]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82e3f39f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:76:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661725, 'reachable_time': 44690, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299901, 'error': None, 'target': 'ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.713 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[bc7128a6-2685-4a81-8b9d-02e0b8c4544a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:7677'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661725, 'tstamp': 661725}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299916, 'error': None, 'target': 'ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.735 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[d10fad5c-5a25-4f9c-87e9-3abdd02da756]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82e3f39f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:76:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 49], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661725, 'reachable_time': 44690, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299922, 'error': None, 'target': 'ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.776 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[d09666e6-b3ad-4c6d-84c3-edd43b6abf87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 20 KiB/s wr, 24 op/s
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.848 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[37cf935c-eca3-4bb6-a6df-85261b2efbba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.850 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82e3f39f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.850 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.850 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82e3f39f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:40:52 np0005596060 NetworkManager[48900]: <info>  [1769452852.8531] manager: (tap82e3f39f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Jan 26 13:40:52 np0005596060 kernel: tap82e3f39f-80: entered promiscuous mode
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.854 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.855 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82e3f39f-80, col_values=(('external_ids', {'iface-id': 'e9b59e49-0dfa-4e26-ac57-5b753f5687f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.858 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82e3f39f-8d87-4e62-a668-ee902f53c144.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82e3f39f-8d87-4e62-a668-ee902f53c144.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:40:52 np0005596060 ovn_controller[148842]: 2026-01-26T18:40:52Z|00171|binding|INFO|Releasing lport e9b59e49-0dfa-4e26-ac57-5b753f5687f0 from this chassis (sb_readonly=0)
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.859 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[9babcb09-86bb-4277-8538-b43617d7fa76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.860 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-82e3f39f-8d87-4e62-a668-ee902f53c144
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/82e3f39f-8d87-4e62-a668-ee902f53c144.pid.haproxy
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 82e3f39f-8d87-4e62-a668-ee902f53c144
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:40:52 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:40:52.862 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144', 'env', 'PROCESS_TAG=haproxy-82e3f39f-8d87-4e62-a668-ee902f53c144', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82e3f39f-8d87-4e62-a668-ee902f53c144.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:40:52 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.872 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:53 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.998 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452852.998368, b81e40ad-cba8-4851-8245-5c3eb983b479 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:40:53 np0005596060 nova_compute[247421]: 2026-01-26 18:40:52.999 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:40:53 np0005596060 nova_compute[247421]: 2026-01-26 18:40:53.001 247428 DEBUG nova.compute.manager [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:40:53 np0005596060 nova_compute[247421]: 2026-01-26 18:40:53.004 247428 INFO nova.virt.libvirt.driver [-] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Instance running successfully.#033[00m
Jan 26 13:40:53 np0005596060 virtqemud[246749]: argument unsupported: QEMU guest agent is not configured
Jan 26 13:40:53 np0005596060 nova_compute[247421]: 2026-01-26 18:40:53.007 247428 DEBUG nova.virt.libvirt.guest [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 26 13:40:53 np0005596060 nova_compute[247421]: 2026-01-26 18:40:53.007 247428 DEBUG nova.virt.libvirt.driver [None req-a12c5a84-e832-4e79-b03e-68979bdf7f4f ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 26 13:40:53 np0005596060 podman[300039]: 2026-01-26 18:40:53.147947873 +0000 UTC m=+0.044434683 container create 8938a99fc30de9cb78b612d6a4f99b63f00198618482667d8768e8e7f3a49555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:40:53 np0005596060 systemd[1]: Started libpod-conmon-8938a99fc30de9cb78b612d6a4f99b63f00198618482667d8768e8e7f3a49555.scope.
Jan 26 13:40:53 np0005596060 podman[300039]: 2026-01-26 18:40:53.128000194 +0000 UTC m=+0.024487024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:40:53 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:40:53 np0005596060 podman[300039]: 2026-01-26 18:40:53.25251901 +0000 UTC m=+0.149005870 container init 8938a99fc30de9cb78b612d6a4f99b63f00198618482667d8768e8e7f3a49555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:40:53 np0005596060 podman[300039]: 2026-01-26 18:40:53.261099824 +0000 UTC m=+0.157586634 container start 8938a99fc30de9cb78b612d6a4f99b63f00198618482667d8768e8e7f3a49555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:40:53 np0005596060 adoring_dewdney[300076]: 167 167
Jan 26 13:40:53 np0005596060 systemd[1]: libpod-8938a99fc30de9cb78b612d6a4f99b63f00198618482667d8768e8e7f3a49555.scope: Deactivated successfully.
Jan 26 13:40:53 np0005596060 podman[300039]: 2026-01-26 18:40:53.269946096 +0000 UTC m=+0.166432906 container attach 8938a99fc30de9cb78b612d6a4f99b63f00198618482667d8768e8e7f3a49555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 13:40:53 np0005596060 conmon[300076]: conmon 8938a99fc30de9cb78b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8938a99fc30de9cb78b612d6a4f99b63f00198618482667d8768e8e7f3a49555.scope/container/memory.events
Jan 26 13:40:53 np0005596060 podman[300039]: 2026-01-26 18:40:53.271047623 +0000 UTC m=+0.167534433 container died 8938a99fc30de9cb78b612d6a4f99b63f00198618482667d8768e8e7f3a49555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:40:53 np0005596060 podman[300078]: 2026-01-26 18:40:53.289559757 +0000 UTC m=+0.066679070 container create dd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:40:53 np0005596060 podman[300039]: 2026-01-26 18:40:53.326738507 +0000 UTC m=+0.223225307 container remove 8938a99fc30de9cb78b612d6a4f99b63f00198618482667d8768e8e7f3a49555 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_dewdney, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 13:40:53 np0005596060 systemd[1]: Started libpod-conmon-dd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400.scope.
Jan 26 13:40:53 np0005596060 systemd[1]: libpod-conmon-8938a99fc30de9cb78b612d6a4f99b63f00198618482667d8768e8e7f3a49555.scope: Deactivated successfully.
Jan 26 13:40:53 np0005596060 podman[300078]: 2026-01-26 18:40:53.258904229 +0000 UTC m=+0.036023522 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:40:53 np0005596060 nova_compute[247421]: 2026-01-26 18:40:53.361 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:40:53 np0005596060 nova_compute[247421]: 2026-01-26 18:40:53.367 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:40:53 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:40:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ee44d0d9904e36c88b5b66b2c56d0005426f2501a1199c096c71bf68eae682f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:53 np0005596060 podman[300078]: 2026-01-26 18:40:53.394585265 +0000 UTC m=+0.171704548 container init dd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 13:40:53 np0005596060 podman[300078]: 2026-01-26 18:40:53.399876327 +0000 UTC m=+0.176995600 container start dd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:40:53 np0005596060 neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144[300108]: [NOTICE]   (300112) : New worker (300114) forked
Jan 26 13:40:53 np0005596060 neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144[300108]: [NOTICE]   (300112) : Loading success.
Jan 26 13:40:53 np0005596060 systemd[1]: var-lib-containers-storage-overlay-cb2d19babe0de14cf58163225736795f8600468c7266e8fc77dc8387ffc730ac-merged.mount: Deactivated successfully.
Jan 26 13:40:53 np0005596060 podman[300128]: 2026-01-26 18:40:53.506980727 +0000 UTC m=+0.042674519 container create c8075e5af681abcd75e0f0ad87dae4233ebdcc7ff30694912753164ff8a277c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_babbage, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:40:53 np0005596060 systemd[1]: Started libpod-conmon-c8075e5af681abcd75e0f0ad87dae4233ebdcc7ff30694912753164ff8a277c3.scope.
Jan 26 13:40:53 np0005596060 podman[300128]: 2026-01-26 18:40:53.490471694 +0000 UTC m=+0.026165506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:40:53 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:40:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250522b1f0bb30a6b05630b9dc5cf8a4b9ae628852b7944acb03677c60a8f03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250522b1f0bb30a6b05630b9dc5cf8a4b9ae628852b7944acb03677c60a8f03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250522b1f0bb30a6b05630b9dc5cf8a4b9ae628852b7944acb03677c60a8f03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250522b1f0bb30a6b05630b9dc5cf8a4b9ae628852b7944acb03677c60a8f03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:40:53 np0005596060 podman[300128]: 2026-01-26 18:40:53.614092177 +0000 UTC m=+0.149785989 container init c8075e5af681abcd75e0f0ad87dae4233ebdcc7ff30694912753164ff8a277c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 13:40:53 np0005596060 podman[300128]: 2026-01-26 18:40:53.624347784 +0000 UTC m=+0.160041576 container start c8075e5af681abcd75e0f0ad87dae4233ebdcc7ff30694912753164ff8a277c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:40:53 np0005596060 podman[300128]: 2026-01-26 18:40:53.628449757 +0000 UTC m=+0.164143549 container attach c8075e5af681abcd75e0f0ad87dae4233ebdcc7ff30694912753164ff8a277c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 13:40:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:53.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:53 np0005596060 nova_compute[247421]: 2026-01-26 18:40:53.833 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 26 13:40:53 np0005596060 nova_compute[247421]: 2026-01-26 18:40:53.834 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452853.000567, b81e40ad-cba8-4851-8245-5c3eb983b479 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:40:53 np0005596060 nova_compute[247421]: 2026-01-26 18:40:53.835 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] VM Started (Lifecycle Event)#033[00m
Jan 26 13:40:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:54.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:54 np0005596060 modest_babbage[300145]: {
Jan 26 13:40:54 np0005596060 modest_babbage[300145]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:40:54 np0005596060 modest_babbage[300145]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:40:54 np0005596060 modest_babbage[300145]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:40:54 np0005596060 modest_babbage[300145]:        "osd_id": 1,
Jan 26 13:40:54 np0005596060 modest_babbage[300145]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:40:54 np0005596060 modest_babbage[300145]:        "type": "bluestore"
Jan 26 13:40:54 np0005596060 modest_babbage[300145]:    }
Jan 26 13:40:54 np0005596060 modest_babbage[300145]: }
Jan 26 13:40:54 np0005596060 nova_compute[247421]: 2026-01-26 18:40:54.517 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:40:54 np0005596060 systemd[1]: libpod-c8075e5af681abcd75e0f0ad87dae4233ebdcc7ff30694912753164ff8a277c3.scope: Deactivated successfully.
Jan 26 13:40:54 np0005596060 conmon[300145]: conmon c8075e5af681abcd75e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8075e5af681abcd75e0f0ad87dae4233ebdcc7ff30694912753164ff8a277c3.scope/container/memory.events
Jan 26 13:40:54 np0005596060 nova_compute[247421]: 2026-01-26 18:40:54.523 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:40:54 np0005596060 podman[300167]: 2026-01-26 18:40:54.56215768 +0000 UTC m=+0.025856218 container died c8075e5af681abcd75e0f0ad87dae4233ebdcc7ff30694912753164ff8a277c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_babbage, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:40:54 np0005596060 nova_compute[247421]: 2026-01-26 18:40:54.566 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 26 13:40:54 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c250522b1f0bb30a6b05630b9dc5cf8a4b9ae628852b7944acb03677c60a8f03-merged.mount: Deactivated successfully.
Jan 26 13:40:54 np0005596060 nova_compute[247421]: 2026-01-26 18:40:54.606 247428 DEBUG nova.compute.manager [req-0210de89-6648-4a08-8373-04f7f0e7a53d req-e44f873c-5a8c-4818-97e3-fb75ec2c805a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received event network-vif-plugged-2e588806-3c53-401a-90f3-537e4176dcfe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:40:54 np0005596060 nova_compute[247421]: 2026-01-26 18:40:54.607 247428 DEBUG oslo_concurrency.lockutils [req-0210de89-6648-4a08-8373-04f7f0e7a53d req-e44f873c-5a8c-4818-97e3-fb75ec2c805a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:40:54 np0005596060 nova_compute[247421]: 2026-01-26 18:40:54.607 247428 DEBUG oslo_concurrency.lockutils [req-0210de89-6648-4a08-8373-04f7f0e7a53d req-e44f873c-5a8c-4818-97e3-fb75ec2c805a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:40:54 np0005596060 nova_compute[247421]: 2026-01-26 18:40:54.608 247428 DEBUG oslo_concurrency.lockutils [req-0210de89-6648-4a08-8373-04f7f0e7a53d req-e44f873c-5a8c-4818-97e3-fb75ec2c805a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:40:54 np0005596060 nova_compute[247421]: 2026-01-26 18:40:54.608 247428 DEBUG nova.compute.manager [req-0210de89-6648-4a08-8373-04f7f0e7a53d req-e44f873c-5a8c-4818-97e3-fb75ec2c805a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] No waiting events found dispatching network-vif-plugged-2e588806-3c53-401a-90f3-537e4176dcfe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:40:54 np0005596060 nova_compute[247421]: 2026-01-26 18:40:54.608 247428 WARNING nova.compute.manager [req-0210de89-6648-4a08-8373-04f7f0e7a53d req-e44f873c-5a8c-4818-97e3-fb75ec2c805a 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received unexpected event network-vif-plugged-2e588806-3c53-401a-90f3-537e4176dcfe for instance with vm_state resized and task_state None.#033[00m
Jan 26 13:40:54 np0005596060 podman[300167]: 2026-01-26 18:40:54.617795432 +0000 UTC m=+0.081493960 container remove c8075e5af681abcd75e0f0ad87dae4233ebdcc7ff30694912753164ff8a277c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_babbage, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 26 13:40:54 np0005596060 systemd[1]: libpod-conmon-c8075e5af681abcd75e0f0ad87dae4233ebdcc7ff30694912753164ff8a277c3.scope: Deactivated successfully.
Jan 26 13:40:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:40:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:40:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:40:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:40:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 567627b0-488d-40c0-8ccd-2b4faa567d90 does not exist
Jan 26 13:40:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 713ccc87-a9a1-4b20-a3a3-cd7ca14f8497 does not exist
Jan 26 13:40:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev a4674e61-f2f6-4623-bebf-fc9b73a23eb4 does not exist
Jan 26 13:40:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 20 KiB/s wr, 24 op/s
Jan 26 13:40:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:40:55 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:40:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:55.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:56.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:40:56 np0005596060 nova_compute[247421]: 2026-01-26 18:40:56.698 247428 DEBUG nova.compute.manager [req-cab344ae-4ed4-4551-8558-cf0e1fdd54bf req-a3009b4b-4be1-4234-8803-a9af2250e89d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received event network-vif-plugged-2e588806-3c53-401a-90f3-537e4176dcfe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:40:56 np0005596060 nova_compute[247421]: 2026-01-26 18:40:56.699 247428 DEBUG oslo_concurrency.lockutils [req-cab344ae-4ed4-4551-8558-cf0e1fdd54bf req-a3009b4b-4be1-4234-8803-a9af2250e89d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:40:56 np0005596060 nova_compute[247421]: 2026-01-26 18:40:56.699 247428 DEBUG oslo_concurrency.lockutils [req-cab344ae-4ed4-4551-8558-cf0e1fdd54bf req-a3009b4b-4be1-4234-8803-a9af2250e89d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:40:56 np0005596060 nova_compute[247421]: 2026-01-26 18:40:56.699 247428 DEBUG oslo_concurrency.lockutils [req-cab344ae-4ed4-4551-8558-cf0e1fdd54bf req-a3009b4b-4be1-4234-8803-a9af2250e89d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:40:56 np0005596060 nova_compute[247421]: 2026-01-26 18:40:56.699 247428 DEBUG nova.compute.manager [req-cab344ae-4ed4-4551-8558-cf0e1fdd54bf req-a3009b4b-4be1-4234-8803-a9af2250e89d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] No waiting events found dispatching network-vif-plugged-2e588806-3c53-401a-90f3-537e4176dcfe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:40:56 np0005596060 nova_compute[247421]: 2026-01-26 18:40:56.700 247428 WARNING nova.compute.manager [req-cab344ae-4ed4-4551-8558-cf0e1fdd54bf req-a3009b4b-4be1-4234-8803-a9af2250e89d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received unexpected event network-vif-plugged-2e588806-3c53-401a-90f3-537e4176dcfe for instance with vm_state resized and task_state None.#033[00m
Jan 26 13:40:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 812 KiB/s rd, 18 KiB/s wr, 52 op/s
Jan 26 13:40:57 np0005596060 nova_compute[247421]: 2026-01-26 18:40:57.199 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:57.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:57 np0005596060 nova_compute[247421]: 2026-01-26 18:40:57.859 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:40:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:40:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:40:58.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:40:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 409 B/s wr, 103 op/s
Jan 26 13:40:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:40:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:40:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:40:59.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:40:59 np0005596060 podman[300234]: 2026-01-26 18:40:59.823151705 +0000 UTC m=+0.081405978 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 26 13:40:59 np0005596060 podman[300235]: 2026-01-26 18:40:59.841008202 +0000 UTC m=+0.099279485 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 26 13:41:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:00.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 395 B/s wr, 100 op/s
Jan 26 13:41:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Jan 26 13:41:00 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Jan 26 13:41:00 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Jan 26 13:41:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:41:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:01.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:02.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:02 np0005596060 nova_compute[247421]: 2026-01-26 18:41:02.201 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 511 B/s wr, 93 op/s
Jan 26 13:41:02 np0005596060 nova_compute[247421]: 2026-01-26 18:41:02.861 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:03.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021730469477014703 of space, bias 1.0, pg target 0.6519140843104411 quantized to 32 (current 32)
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:41:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:04.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 511 B/s wr, 93 op/s
Jan 26 13:41:05 np0005596060 ovn_controller[148842]: 2026-01-26T18:41:05Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:50:d1 10.100.0.7
Jan 26 13:41:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:05.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:06.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:41:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Jan 26 13:41:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Jan 26 13:41:06 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Jan 26 13:41:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 439 KiB/s rd, 1.6 KiB/s wr, 37 op/s
Jan 26 13:41:07 np0005596060 nova_compute[247421]: 2026-01-26 18:41:07.202 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:07.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:07 np0005596060 nova_compute[247421]: 2026-01-26 18:41:07.862 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:08.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 802 KiB/s rd, 19 KiB/s wr, 78 op/s
Jan 26 13:41:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:09.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:10.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 648 KiB/s rd, 15 KiB/s wr, 63 op/s
Jan 26 13:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:41:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 19K writes, 69K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s#012Cumulative WAL: 19K writes, 6301 syncs, 3.10 writes per sync, written: 0.05 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2557 writes, 8679 keys, 2557 commit groups, 1.0 writes per commit group, ingest: 9.15 MB, 0.02 MB/s#012Interval WAL: 2557 writes, 1065 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 13:41:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:41:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:11.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:12.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:12 np0005596060 nova_compute[247421]: 2026-01-26 18:41:12.204 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:12 np0005596060 nova_compute[247421]: 2026-01-26 18:41:12.272 247428 INFO nova.compute.manager [None req-cb2f231c-5f70-4f5f-8a32-105b7aab1075 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Get console output#033[00m
Jan 26 13:41:12 np0005596060 nova_compute[247421]: 2026-01-26 18:41:12.276 285734 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 26 13:41:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 122 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 634 KiB/s rd, 16 KiB/s wr, 53 op/s
Jan 26 13:41:12 np0005596060 nova_compute[247421]: 2026-01-26 18:41:12.896 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:13.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:14.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:41:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:41:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:41:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:41:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:41:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:41:14 np0005596060 nova_compute[247421]: 2026-01-26 18:41:14.692 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:14.692 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:41:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:14.694 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:41:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:14.767 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:41:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:14.768 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:41:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:14.768 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:41:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 122 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 634 KiB/s rd, 27 KiB/s wr, 53 op/s
Jan 26 13:41:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:15.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:15 np0005596060 nova_compute[247421]: 2026-01-26 18:41:15.901 247428 DEBUG nova.compute.manager [req-d38b9a5b-d0b7-4096-ae0f-354d5fc69cbc req-97e056e1-ad6e-4662-bd46-a5ea79bceb8c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received event network-changed-2e588806-3c53-401a-90f3-537e4176dcfe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:41:15 np0005596060 nova_compute[247421]: 2026-01-26 18:41:15.901 247428 DEBUG nova.compute.manager [req-d38b9a5b-d0b7-4096-ae0f-354d5fc69cbc req-97e056e1-ad6e-4662-bd46-a5ea79bceb8c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Refreshing instance network info cache due to event network-changed-2e588806-3c53-401a-90f3-537e4176dcfe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:41:15 np0005596060 nova_compute[247421]: 2026-01-26 18:41:15.901 247428 DEBUG oslo_concurrency.lockutils [req-d38b9a5b-d0b7-4096-ae0f-354d5fc69cbc req-97e056e1-ad6e-4662-bd46-a5ea79bceb8c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-b81e40ad-cba8-4851-8245-5c3eb983b479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:41:15 np0005596060 nova_compute[247421]: 2026-01-26 18:41:15.902 247428 DEBUG oslo_concurrency.lockutils [req-d38b9a5b-d0b7-4096-ae0f-354d5fc69cbc req-97e056e1-ad6e-4662-bd46-a5ea79bceb8c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-b81e40ad-cba8-4851-8245-5c3eb983b479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:41:15 np0005596060 nova_compute[247421]: 2026-01-26 18:41:15.902 247428 DEBUG nova.network.neutron [req-d38b9a5b-d0b7-4096-ae0f-354d5fc69cbc req-97e056e1-ad6e-4662-bd46-a5ea79bceb8c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Refreshing network info cache for port 2e588806-3c53-401a-90f3-537e4176dcfe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:41:15 np0005596060 nova_compute[247421]: 2026-01-26 18:41:15.938 247428 DEBUG oslo_concurrency.lockutils [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "b81e40ad-cba8-4851-8245-5c3eb983b479" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:41:15 np0005596060 nova_compute[247421]: 2026-01-26 18:41:15.939 247428 DEBUG oslo_concurrency.lockutils [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:41:15 np0005596060 nova_compute[247421]: 2026-01-26 18:41:15.939 247428 DEBUG oslo_concurrency.lockutils [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:41:15 np0005596060 nova_compute[247421]: 2026-01-26 18:41:15.939 247428 DEBUG oslo_concurrency.lockutils [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:41:15 np0005596060 nova_compute[247421]: 2026-01-26 18:41:15.940 247428 DEBUG oslo_concurrency.lockutils [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:41:15 np0005596060 nova_compute[247421]: 2026-01-26 18:41:15.941 247428 INFO nova.compute.manager [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Terminating instance#033[00m
Jan 26 13:41:15 np0005596060 nova_compute[247421]: 2026-01-26 18:41:15.942 247428 DEBUG nova.compute.manager [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:41:15 np0005596060 kernel: tap2e588806-3c (unregistering): left promiscuous mode
Jan 26 13:41:15 np0005596060 NetworkManager[48900]: <info>  [1769452875.9915] device (tap2e588806-3c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:41:15 np0005596060 ovn_controller[148842]: 2026-01-26T18:41:15Z|00172|binding|INFO|Releasing lport 2e588806-3c53-401a-90f3-537e4176dcfe from this chassis (sb_readonly=0)
Jan 26 13:41:16 np0005596060 ovn_controller[148842]: 2026-01-26T18:41:15Z|00173|binding|INFO|Setting lport 2e588806-3c53-401a-90f3-537e4176dcfe down in Southbound
Jan 26 13:41:16 np0005596060 ovn_controller[148842]: 2026-01-26T18:41:15Z|00174|binding|INFO|Removing iface tap2e588806-3c ovn-installed in OVS
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.001 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.005 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:50:d1 10.100.0.7'], port_security=['fa:16:3e:24:50:d1 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b81e40ad-cba8-4851-8245-5c3eb983b479', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82e3f39f-8d87-4e62-a668-ee902f53c144', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '301bad5c2066428fa7f214024672bf92', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'ff649c44-332a-4be4-82da-382a0117f640', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9a7598a0-01e1-4002-824f-2c7bac3a3915, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=2e588806-3c53-401a-90f3-537e4176dcfe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.006 159331 INFO neutron.agent.ovn.metadata.agent [-] Port 2e588806-3c53-401a-90f3-537e4176dcfe in datapath 82e3f39f-8d87-4e62-a668-ee902f53c144 unbound from our chassis#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.007 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82e3f39f-8d87-4e62-a668-ee902f53c144, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.009 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b46368-5909-416c-954b-b301af511553]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.010 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144 namespace which is not needed anymore#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.018 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:16.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:16 np0005596060 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Jan 26 13:41:16 np0005596060 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001b.scope: Consumed 13.127s CPU time.
Jan 26 13:41:16 np0005596060 systemd-machined[213879]: Machine qemu-14-instance-0000001b terminated.
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.161 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.167 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.178 247428 INFO nova.virt.libvirt.driver [-] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Instance destroyed successfully.#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.178 247428 DEBUG nova.objects.instance [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'resources' on Instance uuid b81e40ad-cba8-4851-8245-5c3eb983b479 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.208 247428 DEBUG nova.virt.libvirt.vif [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:40:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-766514569',display_name='tempest-TestNetworkAdvancedServerOps-server-766514569',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-766514569',id=27,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL7uQVm9s7C+OqbAh1CIPBxJi+6AkyPpWOPYYV7DcXbtYqg7663H86MBmiolT3Uacef2LD9/V7P8RfgEuQwZCVENs2yHMAD4P9rcdlzFL0K8Hhq6UoTOylf5rcW9T4i1Qg==',key_name='tempest-TestNetworkAdvancedServerOps-706838647',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:40:53Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-rq7teih3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:41:03Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=b81e40ad-cba8-4851-8245-5c3eb983b479,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2e588806-3c53-401a-90f3-537e4176dcfe", "address": "fa:16:3e:24:50:d1", "network": {"id": "82e3f39f-8d87-4e62-a668-ee902f53c144", "bridge": "br-int", "label": "tempest-network-smoke--1049565076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e588806-3c", "ovs_interfaceid": "2e588806-3c53-401a-90f3-537e4176dcfe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.208 247428 DEBUG nova.network.os_vif_util [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "2e588806-3c53-401a-90f3-537e4176dcfe", "address": "fa:16:3e:24:50:d1", "network": {"id": "82e3f39f-8d87-4e62-a668-ee902f53c144", "bridge": "br-int", "label": "tempest-network-smoke--1049565076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.213", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e588806-3c", "ovs_interfaceid": "2e588806-3c53-401a-90f3-537e4176dcfe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.209 247428 DEBUG nova.network.os_vif_util [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:50:d1,bridge_name='br-int',has_traffic_filtering=True,id=2e588806-3c53-401a-90f3-537e4176dcfe,network=Network(82e3f39f-8d87-4e62-a668-ee902f53c144),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e588806-3c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.209 247428 DEBUG os_vif [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:50:d1,bridge_name='br-int',has_traffic_filtering=True,id=2e588806-3c53-401a-90f3-537e4176dcfe,network=Network(82e3f39f-8d87-4e62-a668-ee902f53c144),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e588806-3c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.212 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.212 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2e588806-3c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.213 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.215 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.217 247428 INFO os_vif [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:50:d1,bridge_name='br-int',has_traffic_filtering=True,id=2e588806-3c53-401a-90f3-537e4176dcfe,network=Network(82e3f39f-8d87-4e62-a668-ee902f53c144),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2e588806-3c')#033[00m
Jan 26 13:41:16 np0005596060 neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144[300108]: [NOTICE]   (300112) : haproxy version is 2.8.14-c23fe91
Jan 26 13:41:16 np0005596060 neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144[300108]: [NOTICE]   (300112) : path to executable is /usr/sbin/haproxy
Jan 26 13:41:16 np0005596060 neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144[300108]: [WARNING]  (300112) : Exiting Master process...
Jan 26 13:41:16 np0005596060 neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144[300108]: [ALERT]    (300112) : Current worker (300114) exited with code 143 (Terminated)
Jan 26 13:41:16 np0005596060 neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144[300108]: [WARNING]  (300112) : All workers exited. Exiting... (0)
Jan 26 13:41:16 np0005596060 systemd[1]: libpod-dd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400.scope: Deactivated successfully.
Jan 26 13:41:16 np0005596060 podman[300363]: 2026-01-26 18:41:16.286902094 +0000 UTC m=+0.163018760 container died dd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 13:41:16 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400-userdata-shm.mount: Deactivated successfully.
Jan 26 13:41:16 np0005596060 systemd[1]: var-lib-containers-storage-overlay-5ee44d0d9904e36c88b5b66b2c56d0005426f2501a1199c096c71bf68eae682f-merged.mount: Deactivated successfully.
Jan 26 13:41:16 np0005596060 podman[300363]: 2026-01-26 18:41:16.476907338 +0000 UTC m=+0.353023984 container cleanup dd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 13:41:16 np0005596060 systemd[1]: libpod-conmon-dd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400.scope: Deactivated successfully.
Jan 26 13:41:16 np0005596060 podman[300422]: 2026-01-26 18:41:16.557799563 +0000 UTC m=+0.052529346 container remove dd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.564 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e89112-6f85-47b5-8d47-ae4c98c2c50d]: (4, ('Mon Jan 26 06:41:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144 (dd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400)\ndd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400\nMon Jan 26 06:41:16 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144 (dd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400)\ndd24eb2981184b74efcdf5554818c42ac8342021303a7f1e4af707943b3b3400\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.566 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[9a75d16d-e556-4c94-8618-03e200796d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.567 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82e3f39f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.569 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:16 np0005596060 kernel: tap82e3f39f-80: left promiscuous mode
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.582 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.585 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[ce87366a-b34d-4962-befc-d19b7f502ab5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.600 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[ec422831-3abf-43f3-a408-c5860b760c2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.601 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5a7cacb6-9a0d-4291-993a-5ce3dd780b61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.622 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3617a2-7168-4c17-a3bf-5c8dfe1ddd91]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661717, 'reachable_time': 21441, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300439, 'error': None, 'target': 'ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.624 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82e3f39f-8d87-4e62-a668-ee902f53c144 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:41:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:16.625 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[41e68062-e3a7-411a-a302-4a152fd458e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:41:16 np0005596060 systemd[1]: run-netns-ovnmeta\x2d82e3f39f\x2d8d87\x2d4e62\x2da668\x2dee902f53c144.mount: Deactivated successfully.
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.675 247428 INFO nova.virt.libvirt.driver [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Deleting instance files /var/lib/nova/instances/b81e40ad-cba8-4851-8245-5c3eb983b479_del#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.676 247428 INFO nova.virt.libvirt.driver [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Deletion of /var/lib/nova/instances/b81e40ad-cba8-4851-8245-5c3eb983b479_del complete#033[00m
Jan 26 13:41:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.734 247428 INFO nova.compute.manager [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.734 247428 DEBUG oslo.service.loopingcall [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.735 247428 DEBUG nova.compute.manager [-] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:41:16 np0005596060 nova_compute[247421]: 2026-01-26 18:41:16.735 247428 DEBUG nova.network.neutron [-] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:41:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 94 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 301 KiB/s rd, 27 KiB/s wr, 50 op/s
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.398 247428 DEBUG nova.network.neutron [-] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.400 247428 DEBUG nova.network.neutron [req-d38b9a5b-d0b7-4096-ae0f-354d5fc69cbc req-97e056e1-ad6e-4662-bd46-a5ea79bceb8c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Updated VIF entry in instance network info cache for port 2e588806-3c53-401a-90f3-537e4176dcfe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.400 247428 DEBUG nova.network.neutron [req-d38b9a5b-d0b7-4096-ae0f-354d5fc69cbc req-97e056e1-ad6e-4662-bd46-a5ea79bceb8c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Updating instance_info_cache with network_info: [{"id": "2e588806-3c53-401a-90f3-537e4176dcfe", "address": "fa:16:3e:24:50:d1", "network": {"id": "82e3f39f-8d87-4e62-a668-ee902f53c144", "bridge": "br-int", "label": "tempest-network-smoke--1049565076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2e588806-3c", "ovs_interfaceid": "2e588806-3c53-401a-90f3-537e4176dcfe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.420 247428 DEBUG oslo_concurrency.lockutils [req-d38b9a5b-d0b7-4096-ae0f-354d5fc69cbc req-97e056e1-ad6e-4662-bd46-a5ea79bceb8c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-b81e40ad-cba8-4851-8245-5c3eb983b479" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.422 247428 INFO nova.compute.manager [-] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Took 0.69 seconds to deallocate network for instance.#033[00m
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.464 247428 DEBUG oslo_concurrency.lockutils [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.465 247428 DEBUG oslo_concurrency.lockutils [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.473 247428 DEBUG nova.compute.manager [req-8389ccb3-9ee9-4fbd-86f2-a383b82e0a9b req-a07c0875-855f-44d3-a1fb-2062ea4e2710 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received event network-vif-deleted-2e588806-3c53-401a-90f3-537e4176dcfe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.474 247428 DEBUG oslo_concurrency.lockutils [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.506 247428 INFO nova.scheduler.client.report [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Deleted allocations for instance b81e40ad-cba8-4851-8245-5c3eb983b479#033[00m
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.558 247428 DEBUG oslo_concurrency.lockutils [None req-deaf0db4-e4a7-4dd3-8703-10d5245ad956 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:41:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:17.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:17 np0005596060 nova_compute[247421]: 2026-01-26 18:41:17.898 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:18.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:18 np0005596060 nova_compute[247421]: 2026-01-26 18:41:18.072 247428 DEBUG nova.compute.manager [req-111732c0-4486-493c-9ee2-7aa9fb73e3da req-c28f40bd-9339-46b7-9415-e7b3b2ce6009 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received event network-vif-unplugged-2e588806-3c53-401a-90f3-537e4176dcfe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:41:18 np0005596060 nova_compute[247421]: 2026-01-26 18:41:18.072 247428 DEBUG oslo_concurrency.lockutils [req-111732c0-4486-493c-9ee2-7aa9fb73e3da req-c28f40bd-9339-46b7-9415-e7b3b2ce6009 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:41:18 np0005596060 nova_compute[247421]: 2026-01-26 18:41:18.072 247428 DEBUG oslo_concurrency.lockutils [req-111732c0-4486-493c-9ee2-7aa9fb73e3da req-c28f40bd-9339-46b7-9415-e7b3b2ce6009 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:41:18 np0005596060 nova_compute[247421]: 2026-01-26 18:41:18.072 247428 DEBUG oslo_concurrency.lockutils [req-111732c0-4486-493c-9ee2-7aa9fb73e3da req-c28f40bd-9339-46b7-9415-e7b3b2ce6009 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:41:18 np0005596060 nova_compute[247421]: 2026-01-26 18:41:18.072 247428 DEBUG nova.compute.manager [req-111732c0-4486-493c-9ee2-7aa9fb73e3da req-c28f40bd-9339-46b7-9415-e7b3b2ce6009 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] No waiting events found dispatching network-vif-unplugged-2e588806-3c53-401a-90f3-537e4176dcfe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:41:18 np0005596060 nova_compute[247421]: 2026-01-26 18:41:18.073 247428 WARNING nova.compute.manager [req-111732c0-4486-493c-9ee2-7aa9fb73e3da req-c28f40bd-9339-46b7-9415-e7b3b2ce6009 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received unexpected event network-vif-unplugged-2e588806-3c53-401a-90f3-537e4176dcfe for instance with vm_state deleted and task_state None.#033[00m
Jan 26 13:41:18 np0005596060 nova_compute[247421]: 2026-01-26 18:41:18.073 247428 DEBUG nova.compute.manager [req-111732c0-4486-493c-9ee2-7aa9fb73e3da req-c28f40bd-9339-46b7-9415-e7b3b2ce6009 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received event network-vif-plugged-2e588806-3c53-401a-90f3-537e4176dcfe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:41:18 np0005596060 nova_compute[247421]: 2026-01-26 18:41:18.073 247428 DEBUG oslo_concurrency.lockutils [req-111732c0-4486-493c-9ee2-7aa9fb73e3da req-c28f40bd-9339-46b7-9415-e7b3b2ce6009 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:41:18 np0005596060 nova_compute[247421]: 2026-01-26 18:41:18.073 247428 DEBUG oslo_concurrency.lockutils [req-111732c0-4486-493c-9ee2-7aa9fb73e3da req-c28f40bd-9339-46b7-9415-e7b3b2ce6009 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:41:18 np0005596060 nova_compute[247421]: 2026-01-26 18:41:18.073 247428 DEBUG oslo_concurrency.lockutils [req-111732c0-4486-493c-9ee2-7aa9fb73e3da req-c28f40bd-9339-46b7-9415-e7b3b2ce6009 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "b81e40ad-cba8-4851-8245-5c3eb983b479-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:41:18 np0005596060 nova_compute[247421]: 2026-01-26 18:41:18.073 247428 DEBUG nova.compute.manager [req-111732c0-4486-493c-9ee2-7aa9fb73e3da req-c28f40bd-9339-46b7-9415-e7b3b2ce6009 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] No waiting events found dispatching network-vif-plugged-2e588806-3c53-401a-90f3-537e4176dcfe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:41:18 np0005596060 nova_compute[247421]: 2026-01-26 18:41:18.074 247428 WARNING nova.compute.manager [req-111732c0-4486-493c-9ee2-7aa9fb73e3da req-c28f40bd-9339-46b7-9415-e7b3b2ce6009 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Received unexpected event network-vif-plugged-2e588806-3c53-401a-90f3-537e4176dcfe for instance with vm_state deleted and task_state None.#033[00m
Jan 26 13:41:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 41 MiB data, 388 MiB used, 21 GiB / 21 GiB avail; 263 KiB/s rd, 23 KiB/s wr, 56 op/s
Jan 26 13:41:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:19.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:20.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 41 MiB data, 388 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 26 13:41:21 np0005596060 nova_compute[247421]: 2026-01-26 18:41:21.215 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:21 np0005596060 nova_compute[247421]: 2026-01-26 18:41:21.616 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:41:21 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:41:21.696 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:41:21 np0005596060 nova_compute[247421]: 2026-01-26 18:41:21.715 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:21.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:22.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 26 13:41:22 np0005596060 nova_compute[247421]: 2026-01-26 18:41:22.900 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:23 np0005596060 nova_compute[247421]: 2026-01-26 18:41:23.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:41:23 np0005596060 ceph-mgr[74563]: [devicehealth INFO root] Check health
Jan 26 13:41:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:23.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:24.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:24 np0005596060 nova_compute[247421]: 2026-01-26 18:41:24.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:41:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 9.9 KiB/s wr, 28 op/s
Jan 26 13:41:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:25.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:26.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:26 np0005596060 nova_compute[247421]: 2026-01-26 18:41:26.218 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:26 np0005596060 nova_compute[247421]: 2026-01-26 18:41:26.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:41:26 np0005596060 nova_compute[247421]: 2026-01-26 18:41:26.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:41:26 np0005596060 nova_compute[247421]: 2026-01-26 18:41:26.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:41:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:41:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 13:41:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:27.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:27 np0005596060 nova_compute[247421]: 2026-01-26 18:41:27.902 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:28.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:28 np0005596060 nova_compute[247421]: 2026-01-26 18:41:28.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:41:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 9.0 KiB/s rd, 341 B/s wr, 13 op/s
Jan 26 13:41:29 np0005596060 nova_compute[247421]: 2026-01-26 18:41:29.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:41:29 np0005596060 nova_compute[247421]: 2026-01-26 18:41:29.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:41:29 np0005596060 nova_compute[247421]: 2026-01-26 18:41:29.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:41:29 np0005596060 nova_compute[247421]: 2026-01-26 18:41:29.702 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:41:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:29.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:30.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:30 np0005596060 nova_compute[247421]: 2026-01-26 18:41:30.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:41:30 np0005596060 podman[300498]: 2026-01-26 18:41:30.789710654 +0000 UTC m=+0.056982977 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:41:30 np0005596060 nova_compute[247421]: 2026-01-26 18:41:30.822 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:41:30 np0005596060 nova_compute[247421]: 2026-01-26 18:41:30.823 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:41:30 np0005596060 nova_compute[247421]: 2026-01-26 18:41:30.823 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:41:30 np0005596060 nova_compute[247421]: 2026-01-26 18:41:30.823 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:41:30 np0005596060 nova_compute[247421]: 2026-01-26 18:41:30.823 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:41:30 np0005596060 podman[300499]: 2026-01-26 18:41:30.82391529 +0000 UTC m=+0.089853180 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 13:41:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 26 13:41:31 np0005596060 nova_compute[247421]: 2026-01-26 18:41:31.176 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769452876.1758046, b81e40ad-cba8-4851-8245-5c3eb983b479 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:41:31 np0005596060 nova_compute[247421]: 2026-01-26 18:41:31.177 247428 INFO nova.compute.manager [-] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:41:31 np0005596060 nova_compute[247421]: 2026-01-26 18:41:31.207 247428 DEBUG nova.compute.manager [None req-04342e1f-5331-48cb-9450-341d3a4facc7 - - - - - -] [instance: b81e40ad-cba8-4851-8245-5c3eb983b479] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:41:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:41:31 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1109980781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:41:31 np0005596060 nova_compute[247421]: 2026-01-26 18:41:31.264 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:31 np0005596060 nova_compute[247421]: 2026-01-26 18:41:31.286 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:41:31 np0005596060 nova_compute[247421]: 2026-01-26 18:41:31.436 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:41:31 np0005596060 nova_compute[247421]: 2026-01-26 18:41:31.437 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4663MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:41:31 np0005596060 nova_compute[247421]: 2026-01-26 18:41:31.437 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:41:31 np0005596060 nova_compute[247421]: 2026-01-26 18:41:31.438 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:41:31 np0005596060 nova_compute[247421]: 2026-01-26 18:41:31.567 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:41:31 np0005596060 nova_compute[247421]: 2026-01-26 18:41:31.568 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:41:31 np0005596060 nova_compute[247421]: 2026-01-26 18:41:31.603 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:41:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:41:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:31.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:41:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2459554457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:41:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:32.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:32 np0005596060 nova_compute[247421]: 2026-01-26 18:41:32.070 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:41:32 np0005596060 nova_compute[247421]: 2026-01-26 18:41:32.075 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:41:32 np0005596060 nova_compute[247421]: 2026-01-26 18:41:32.112 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:41:32 np0005596060 nova_compute[247421]: 2026-01-26 18:41:32.178 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:41:32 np0005596060 nova_compute[247421]: 2026-01-26 18:41:32.178 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:41:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 26 13:41:32 np0005596060 nova_compute[247421]: 2026-01-26 18:41:32.904 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:33 np0005596060 nova_compute[247421]: 2026-01-26 18:41:33.178 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:41:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:33.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:34.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:41:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:35.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:41:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:36.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:36 np0005596060 nova_compute[247421]: 2026-01-26 18:41:36.268 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:41:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:37.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:37 np0005596060 nova_compute[247421]: 2026-01-26 18:41:37.905 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:38.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:39.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:40.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:41:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1585459160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:41:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:41:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1585459160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:41:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:41 np0005596060 nova_compute[247421]: 2026-01-26 18:41:41.272 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:41:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:41.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:42.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:42 np0005596060 nova_compute[247421]: 2026-01-26 18:41:42.906 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:43.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:44.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:41:44
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['backups', 'volumes', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.log']
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:41:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:41:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:45.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:46.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:46 np0005596060 nova_compute[247421]: 2026-01-26 18:41:46.323 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:41:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:47.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:47 np0005596060 nova_compute[247421]: 2026-01-26 18:41:47.908 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:48.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:48 np0005596060 nova_compute[247421]: 2026-01-26 18:41:48.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:41:48 np0005596060 nova_compute[247421]: 2026-01-26 18:41:48.651 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:41:48 np0005596060 nova_compute[247421]: 2026-01-26 18:41:48.651 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:41:48 np0005596060 nova_compute[247421]: 2026-01-26 18:41:48.652 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:41:48 np0005596060 nova_compute[247421]: 2026-01-26 18:41:48.652 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:41:48 np0005596060 nova_compute[247421]: 2026-01-26 18:41:48.652 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:41:48 np0005596060 nova_compute[247421]: 2026-01-26 18:41:48.653 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:41:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:49.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:50.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:50 np0005596060 nova_compute[247421]: 2026-01-26 18:41:50.563 247428 DEBUG nova.virt.libvirt.imagecache [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Jan 26 13:41:50 np0005596060 nova_compute[247421]: 2026-01-26 18:41:50.564 247428 WARNING nova.virt.libvirt.imagecache [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216#033[00m
Jan 26 13:41:50 np0005596060 nova_compute[247421]: 2026-01-26 18:41:50.564 247428 WARNING nova.virt.libvirt.imagecache [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e#033[00m
Jan 26 13:41:50 np0005596060 nova_compute[247421]: 2026-01-26 18:41:50.564 247428 INFO nova.virt.libvirt.imagecache [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Removable base files: /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e#033[00m
Jan 26 13:41:50 np0005596060 nova_compute[247421]: 2026-01-26 18:41:50.565 247428 INFO nova.virt.libvirt.imagecache [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216#033[00m
Jan 26 13:41:50 np0005596060 nova_compute[247421]: 2026-01-26 18:41:50.565 247428 INFO nova.virt.libvirt.imagecache [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/845aad0744c07ae3a06850747475706fc56a381e#033[00m
Jan 26 13:41:50 np0005596060 nova_compute[247421]: 2026-01-26 18:41:50.565 247428 DEBUG nova.virt.libvirt.imagecache [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Jan 26 13:41:50 np0005596060 nova_compute[247421]: 2026-01-26 18:41:50.565 247428 DEBUG nova.virt.libvirt.imagecache [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Jan 26 13:41:50 np0005596060 nova_compute[247421]: 2026-01-26 18:41:50.565 247428 DEBUG nova.virt.libvirt.imagecache [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Jan 26 13:41:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:51 np0005596060 nova_compute[247421]: 2026-01-26 18:41:51.326 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:41:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:51.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:52.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:52 np0005596060 nova_compute[247421]: 2026-01-26 18:41:52.913 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:53.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:54.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:55.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:56.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:56 np0005596060 nova_compute[247421]: 2026-01-26 18:41:56.329 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:41:56 np0005596060 podman[300926]: 2026-01-26 18:41:56.514634512 +0000 UTC m=+0.042035343 container create ca1e93d1f338a4f18fe060454e95227bc0449e59c75430472d6e39084101e6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 13:41:56 np0005596060 systemd[1]: Started libpod-conmon-ca1e93d1f338a4f18fe060454e95227bc0449e59c75430472d6e39084101e6bb.scope.
Jan 26 13:41:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:41:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:41:56 np0005596060 podman[300926]: 2026-01-26 18:41:56.495578845 +0000 UTC m=+0.022979706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:41:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:56 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:41:56 np0005596060 podman[300926]: 2026-01-26 18:41:56.652596264 +0000 UTC m=+0.179997115 container init ca1e93d1f338a4f18fe060454e95227bc0449e59c75430472d6e39084101e6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 26 13:41:56 np0005596060 podman[300926]: 2026-01-26 18:41:56.661834756 +0000 UTC m=+0.189235587 container start ca1e93d1f338a4f18fe060454e95227bc0449e59c75430472d6e39084101e6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kirch, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:41:56 np0005596060 podman[300926]: 2026-01-26 18:41:56.665431396 +0000 UTC m=+0.192832227 container attach ca1e93d1f338a4f18fe060454e95227bc0449e59c75430472d6e39084101e6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kirch, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:41:56 np0005596060 festive_kirch[300942]: 167 167
Jan 26 13:41:56 np0005596060 systemd[1]: libpod-ca1e93d1f338a4f18fe060454e95227bc0449e59c75430472d6e39084101e6bb.scope: Deactivated successfully.
Jan 26 13:41:56 np0005596060 podman[300926]: 2026-01-26 18:41:56.671153079 +0000 UTC m=+0.198553910 container died ca1e93d1f338a4f18fe060454e95227bc0449e59c75430472d6e39084101e6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 13:41:56 np0005596060 systemd[1]: var-lib-containers-storage-overlay-080ae980da54736f10ded639da0eba5072bd91f322fd59c945d53439bb0f659a-merged.mount: Deactivated successfully.
Jan 26 13:41:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:41:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:41:56 np0005596060 podman[300926]: 2026-01-26 18:41:56.708858692 +0000 UTC m=+0.236259523 container remove ca1e93d1f338a4f18fe060454e95227bc0449e59c75430472d6e39084101e6bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_kirch, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 13:41:56 np0005596060 systemd[1]: libpod-conmon-ca1e93d1f338a4f18fe060454e95227bc0449e59c75430472d6e39084101e6bb.scope: Deactivated successfully.
Jan 26 13:41:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:41:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:56 np0005596060 podman[300965]: 2026-01-26 18:41:56.877594275 +0000 UTC m=+0.041841928 container create fbecd199b3cb0b7391855c2dcbe2f2ed64c64b63c5252907d7c228d66b263f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 26 13:41:56 np0005596060 systemd[1]: Started libpod-conmon-fbecd199b3cb0b7391855c2dcbe2f2ed64c64b63c5252907d7c228d66b263f8e.scope.
Jan 26 13:41:56 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:41:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d2c551d36367733d23382be59e2598ec4153b516c2f6c769541c25db48880e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:41:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d2c551d36367733d23382be59e2598ec4153b516c2f6c769541c25db48880e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:41:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d2c551d36367733d23382be59e2598ec4153b516c2f6c769541c25db48880e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:41:56 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94d2c551d36367733d23382be59e2598ec4153b516c2f6c769541c25db48880e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:41:56 np0005596060 podman[300965]: 2026-01-26 18:41:56.860070576 +0000 UTC m=+0.024318259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:41:56 np0005596060 podman[300965]: 2026-01-26 18:41:56.96653874 +0000 UTC m=+0.130786403 container init fbecd199b3cb0b7391855c2dcbe2f2ed64c64b63c5252907d7c228d66b263f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:41:56 np0005596060 podman[300965]: 2026-01-26 18:41:56.975698539 +0000 UTC m=+0.139946202 container start fbecd199b3cb0b7391855c2dcbe2f2ed64c64b63c5252907d7c228d66b263f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_darwin, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:41:56 np0005596060 podman[300965]: 2026-01-26 18:41:56.978768396 +0000 UTC m=+0.143016079 container attach fbecd199b3cb0b7391855c2dcbe2f2ed64c64b63c5252907d7c228d66b263f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 13:41:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:57 np0005596060 nova_compute[247421]: 2026-01-26 18:41:57.764 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:41:57 np0005596060 nova_compute[247421]: 2026-01-26 18:41:57.767 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:41:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:57.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:57 np0005596060 nova_compute[247421]: 2026-01-26 18:41:57.850 247428 DEBUG nova.compute.manager [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 13:41:57 np0005596060 nova_compute[247421]: 2026-01-26 18:41:57.914 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.016 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.017 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.024 247428 DEBUG nova.virt.hardware [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.024 247428 INFO nova.compute.claims [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Claim successful on node compute-0.ctlplane.example.com
Jan 26 13:41:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:41:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:41:58.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:41:58 np0005596060 competent_darwin[300982]: [
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:    {
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:        "available": false,
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:        "ceph_device": false,
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:        "lsm_data": {},
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:        "lvs": [],
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:        "path": "/dev/sr0",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:        "rejected_reasons": [
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "Insufficient space (<5GB)",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "Has a FileSystem"
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:        ],
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:        "sys_api": {
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "actuators": null,
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "device_nodes": "sr0",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "devname": "sr0",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "human_readable_size": "482.00 KB",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "id_bus": "ata",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "model": "QEMU DVD-ROM",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "nr_requests": "2",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "parent": "/dev/sr0",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "partitions": {},
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "path": "/dev/sr0",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "removable": "1",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "rev": "2.5+",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "ro": "0",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "rotational": "1",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "sas_address": "",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "sas_device_handle": "",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "scheduler_mode": "mq-deadline",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "sectors": 0,
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "sectorsize": "2048",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "size": 493568.0,
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "support_discard": "2048",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "type": "disk",
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:            "vendor": "QEMU"
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:        }
Jan 26 13:41:58 np0005596060 competent_darwin[300982]:    }
Jan 26 13:41:58 np0005596060 competent_darwin[300982]: ]
Jan 26 13:41:58 np0005596060 systemd[1]: libpod-fbecd199b3cb0b7391855c2dcbe2f2ed64c64b63c5252907d7c228d66b263f8e.scope: Deactivated successfully.
Jan 26 13:41:58 np0005596060 systemd[1]: libpod-fbecd199b3cb0b7391855c2dcbe2f2ed64c64b63c5252907d7c228d66b263f8e.scope: Consumed 1.248s CPU time.
Jan 26 13:41:58 np0005596060 podman[300965]: 2026-01-26 18:41:58.202882327 +0000 UTC m=+1.367129990 container died fbecd199b3cb0b7391855c2dcbe2f2ed64c64b63c5252907d7c228d66b263f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_darwin, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.209 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:41:58 np0005596060 systemd[1]: var-lib-containers-storage-overlay-94d2c551d36367733d23382be59e2598ec4153b516c2f6c769541c25db48880e-merged.mount: Deactivated successfully.
Jan 26 13:41:58 np0005596060 podman[300965]: 2026-01-26 18:41:58.255146775 +0000 UTC m=+1.419394428 container remove fbecd199b3cb0b7391855c2dcbe2f2ed64c64b63c5252907d7c228d66b263f8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 13:41:58 np0005596060 systemd[1]: libpod-conmon-fbecd199b3cb0b7391855c2dcbe2f2ed64c64b63c5252907d7c228d66b263f8e.scope: Deactivated successfully.
Jan 26 13:41:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:41:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:41:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:58 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:58 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:41:58 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4243831676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.637 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.643 247428 DEBUG nova.compute.provider_tree [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.662 247428 DEBUG nova.scheduler.client.report [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.692 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.693 247428 DEBUG nova.compute.manager [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.802 247428 DEBUG nova.compute.manager [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.802 247428 DEBUG nova.network.neutron [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.829 247428 INFO nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.863 247428 DEBUG nova.compute.manager [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 13:41:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:41:58 np0005596060 nova_compute[247421]: 2026-01-26 18:41:58.999 247428 DEBUG nova.compute.manager [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.001 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.002 247428 INFO nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Creating image(s)
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.034 247428 DEBUG nova.storage.rbd_utils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.060 247428 DEBUG nova.storage.rbd_utils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.084 247428 DEBUG nova.storage.rbd_utils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.088 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.153 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.154 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "0e27310cde9db7031eb6052434134c1283ddf216" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.155 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.155 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.179 247428 DEBUG nova.storage.rbd_utils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.183 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.495 247428 DEBUG nova.policy [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ffa1cd7ba9e543f78f2ef48c2a7a67a2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '301bad5c2066428fa7f214024672bf92', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:41:59 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev df529a45-d854-428f-be6d-658d6be044f2 does not exist
Jan 26 13:41:59 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7847126d-c3cd-4041-b6e4-7b6df66edc1e does not exist
Jan 26 13:41:59 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 6fa090eb-8deb-4b16-ac2b-5c3303cadc8d does not exist
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.786 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:41:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:41:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:41:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:41:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:41:59.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:41:59 np0005596060 nova_compute[247421]: 2026-01-26 18:41:59.866 247428 DEBUG nova.storage.rbd_utils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] resizing rbd image 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 13:42:00 np0005596060 nova_compute[247421]: 2026-01-26 18:42:00.021 247428 DEBUG nova.objects.instance [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 13:42:00 np0005596060 nova_compute[247421]: 2026-01-26 18:42:00.041 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 13:42:00 np0005596060 nova_compute[247421]: 2026-01-26 18:42:00.042 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Ensure instance console log exists: /var/lib/nova/instances/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 13:42:00 np0005596060 nova_compute[247421]: 2026-01-26 18:42:00.042 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:42:00 np0005596060 nova_compute[247421]: 2026-01-26 18:42:00.043 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:42:00 np0005596060 nova_compute[247421]: 2026-01-26 18:42:00.043 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:42:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:00.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:00 np0005596060 podman[302605]: 2026-01-26 18:42:00.342898876 +0000 UTC m=+0.039735115 container create 80842fc9fe798bd066da38e342d2ebd0b9d41d410c7960902b24d89089e0a4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_vaughan, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 26 13:42:00 np0005596060 systemd[1]: Started libpod-conmon-80842fc9fe798bd066da38e342d2ebd0b9d41d410c7960902b24d89089e0a4de.scope.
Jan 26 13:42:00 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:42:00 np0005596060 podman[302605]: 2026-01-26 18:42:00.325210323 +0000 UTC m=+0.022046582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:42:00 np0005596060 podman[302605]: 2026-01-26 18:42:00.426614131 +0000 UTC m=+0.123450560 container init 80842fc9fe798bd066da38e342d2ebd0b9d41d410c7960902b24d89089e0a4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:42:00 np0005596060 podman[302605]: 2026-01-26 18:42:00.434147479 +0000 UTC m=+0.130983718 container start 80842fc9fe798bd066da38e342d2ebd0b9d41d410c7960902b24d89089e0a4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_vaughan, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:42:00 np0005596060 podman[302605]: 2026-01-26 18:42:00.437337049 +0000 UTC m=+0.134173308 container attach 80842fc9fe798bd066da38e342d2ebd0b9d41d410c7960902b24d89089e0a4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_vaughan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 13:42:00 np0005596060 elastic_vaughan[302621]: 167 167
Jan 26 13:42:00 np0005596060 systemd[1]: libpod-80842fc9fe798bd066da38e342d2ebd0b9d41d410c7960902b24d89089e0a4de.scope: Deactivated successfully.
Jan 26 13:42:00 np0005596060 podman[302605]: 2026-01-26 18:42:00.441062382 +0000 UTC m=+0.137898621 container died 80842fc9fe798bd066da38e342d2ebd0b9d41d410c7960902b24d89089e0a4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 13:42:00 np0005596060 systemd[1]: var-lib-containers-storage-overlay-3f3d33e8c05b653152e73b3135bef68256594384cfd00a4917d43c7a510cae21-merged.mount: Deactivated successfully.
Jan 26 13:42:00 np0005596060 podman[302605]: 2026-01-26 18:42:00.48454648 +0000 UTC m=+0.181382719 container remove 80842fc9fe798bd066da38e342d2ebd0b9d41d410c7960902b24d89089e0a4de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 13:42:00 np0005596060 systemd[1]: libpod-conmon-80842fc9fe798bd066da38e342d2ebd0b9d41d410c7960902b24d89089e0a4de.scope: Deactivated successfully.
Jan 26 13:42:00 np0005596060 podman[302643]: 2026-01-26 18:42:00.652289148 +0000 UTC m=+0.041227593 container create 8988a870724862bd672b17b5ded2692c4a7f819b2d57f7bf0955a61d8f3a5c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_chandrasekhar, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:42:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:42:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:42:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:42:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:42:00 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:42:00 np0005596060 systemd[1]: Started libpod-conmon-8988a870724862bd672b17b5ded2692c4a7f819b2d57f7bf0955a61d8f3a5c88.scope.
Jan 26 13:42:00 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:42:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4781882f1cb3e749514f2f1296d1e7898600141e7c2ee1ee232cc18936b84946/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4781882f1cb3e749514f2f1296d1e7898600141e7c2ee1ee232cc18936b84946/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4781882f1cb3e749514f2f1296d1e7898600141e7c2ee1ee232cc18936b84946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4781882f1cb3e749514f2f1296d1e7898600141e7c2ee1ee232cc18936b84946/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:00 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4781882f1cb3e749514f2f1296d1e7898600141e7c2ee1ee232cc18936b84946/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:00 np0005596060 podman[302643]: 2026-01-26 18:42:00.634939494 +0000 UTC m=+0.023877969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:42:00 np0005596060 podman[302643]: 2026-01-26 18:42:00.743123741 +0000 UTC m=+0.132062206 container init 8988a870724862bd672b17b5ded2692c4a7f819b2d57f7bf0955a61d8f3a5c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_chandrasekhar, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 13:42:00 np0005596060 podman[302643]: 2026-01-26 18:42:00.750534666 +0000 UTC m=+0.139473111 container start 8988a870724862bd672b17b5ded2692c4a7f819b2d57f7bf0955a61d8f3a5c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_chandrasekhar, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 13:42:00 np0005596060 podman[302643]: 2026-01-26 18:42:00.754028094 +0000 UTC m=+0.142966539 container attach 8988a870724862bd672b17b5ded2692c4a7f819b2d57f7bf0955a61d8f3a5c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:42:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 41 MiB data, 381 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:42:01 np0005596060 nova_compute[247421]: 2026-01-26 18:42:01.333 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:01 np0005596060 focused_chandrasekhar[302659]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:42:01 np0005596060 focused_chandrasekhar[302659]: --> relative data size: 1.0
Jan 26 13:42:01 np0005596060 focused_chandrasekhar[302659]: --> All data devices are unavailable
Jan 26 13:42:01 np0005596060 systemd[1]: libpod-8988a870724862bd672b17b5ded2692c4a7f819b2d57f7bf0955a61d8f3a5c88.scope: Deactivated successfully.
Jan 26 13:42:01 np0005596060 podman[302643]: 2026-01-26 18:42:01.578231408 +0000 UTC m=+0.967169853 container died 8988a870724862bd672b17b5ded2692c4a7f819b2d57f7bf0955a61d8f3a5c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_chandrasekhar, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 13:42:01 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4781882f1cb3e749514f2f1296d1e7898600141e7c2ee1ee232cc18936b84946-merged.mount: Deactivated successfully.
Jan 26 13:42:01 np0005596060 podman[302643]: 2026-01-26 18:42:01.63785667 +0000 UTC m=+1.026795115 container remove 8988a870724862bd672b17b5ded2692c4a7f819b2d57f7bf0955a61d8f3a5c88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:42:01 np0005596060 systemd[1]: libpod-conmon-8988a870724862bd672b17b5ded2692c4a7f819b2d57f7bf0955a61d8f3a5c88.scope: Deactivated successfully.
Jan 26 13:42:01 np0005596060 podman[302675]: 2026-01-26 18:42:01.680257801 +0000 UTC m=+0.071544161 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 26 13:42:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:42:01 np0005596060 podman[302678]: 2026-01-26 18:42:01.708376214 +0000 UTC m=+0.099623343 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 13:42:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:01.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:02.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:02 np0005596060 podman[302877]: 2026-01-26 18:42:02.282620323 +0000 UTC m=+0.071344786 container create 2f85cbd5d4397dea73ccf1aa7f2b3d9e93e1b5b1a0ee35e76c12bcf2651986f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:42:02 np0005596060 systemd[1]: Started libpod-conmon-2f85cbd5d4397dea73ccf1aa7f2b3d9e93e1b5b1a0ee35e76c12bcf2651986f5.scope.
Jan 26 13:42:02 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:42:02 np0005596060 podman[302877]: 2026-01-26 18:42:02.260800817 +0000 UTC m=+0.049525310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:42:02 np0005596060 podman[302877]: 2026-01-26 18:42:02.372120462 +0000 UTC m=+0.160844945 container init 2f85cbd5d4397dea73ccf1aa7f2b3d9e93e1b5b1a0ee35e76c12bcf2651986f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:42:02 np0005596060 podman[302877]: 2026-01-26 18:42:02.379041616 +0000 UTC m=+0.167766079 container start 2f85cbd5d4397dea73ccf1aa7f2b3d9e93e1b5b1a0ee35e76c12bcf2651986f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chandrasekhar, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 13:42:02 np0005596060 podman[302877]: 2026-01-26 18:42:02.382269166 +0000 UTC m=+0.170993649 container attach 2f85cbd5d4397dea73ccf1aa7f2b3d9e93e1b5b1a0ee35e76c12bcf2651986f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chandrasekhar, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:42:02 np0005596060 inspiring_chandrasekhar[302894]: 167 167
Jan 26 13:42:02 np0005596060 systemd[1]: libpod-2f85cbd5d4397dea73ccf1aa7f2b3d9e93e1b5b1a0ee35e76c12bcf2651986f5.scope: Deactivated successfully.
Jan 26 13:42:02 np0005596060 podman[302877]: 2026-01-26 18:42:02.38602935 +0000 UTC m=+0.174753833 container died 2f85cbd5d4397dea73ccf1aa7f2b3d9e93e1b5b1a0ee35e76c12bcf2651986f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:42:02 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8c6793f542ddcecf53247e41adfcaf09888279bc35f9f982b9cc5c4974e696e3-merged.mount: Deactivated successfully.
Jan 26 13:42:02 np0005596060 podman[302877]: 2026-01-26 18:42:02.420238776 +0000 UTC m=+0.208963239 container remove 2f85cbd5d4397dea73ccf1aa7f2b3d9e93e1b5b1a0ee35e76c12bcf2651986f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 13:42:02 np0005596060 systemd[1]: libpod-conmon-2f85cbd5d4397dea73ccf1aa7f2b3d9e93e1b5b1a0ee35e76c12bcf2651986f5.scope: Deactivated successfully.
Jan 26 13:42:02 np0005596060 podman[302918]: 2026-01-26 18:42:02.610367094 +0000 UTC m=+0.060631788 container create d7591d04c710a3e99860ee147c0edd57ecdb9e07e76b0835daa613c8a6158548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldwasser, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:42:02 np0005596060 systemd[1]: Started libpod-conmon-d7591d04c710a3e99860ee147c0edd57ecdb9e07e76b0835daa613c8a6158548.scope.
Jan 26 13:42:02 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:42:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17e2778c5b1e5f07cd2a66846189bd2cc85716e9ffb7ec108a346abf2b4381f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17e2778c5b1e5f07cd2a66846189bd2cc85716e9ffb7ec108a346abf2b4381f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17e2778c5b1e5f07cd2a66846189bd2cc85716e9ffb7ec108a346abf2b4381f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:02 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17e2778c5b1e5f07cd2a66846189bd2cc85716e9ffb7ec108a346abf2b4381f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:02 np0005596060 podman[302918]: 2026-01-26 18:42:02.679097854 +0000 UTC m=+0.129362578 container init d7591d04c710a3e99860ee147c0edd57ecdb9e07e76b0835daa613c8a6158548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldwasser, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:42:02 np0005596060 podman[302918]: 2026-01-26 18:42:02.587230475 +0000 UTC m=+0.037495209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:42:02 np0005596060 podman[302918]: 2026-01-26 18:42:02.686914629 +0000 UTC m=+0.137179323 container start d7591d04c710a3e99860ee147c0edd57ecdb9e07e76b0835daa613c8a6158548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:42:02 np0005596060 podman[302918]: 2026-01-26 18:42:02.690843048 +0000 UTC m=+0.141107762 container attach d7591d04c710a3e99860ee147c0edd57ecdb9e07e76b0835daa613c8a6158548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldwasser, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:42:02 np0005596060 nova_compute[247421]: 2026-01-26 18:42:02.690 247428 DEBUG nova.network.neutron [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Successfully created port: a76d9016-429e-486e-9688-7ceb79a8fbc5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 26 13:42:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:42:02 np0005596060 nova_compute[247421]: 2026-01-26 18:42:02.917 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]: {
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:    "1": [
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:        {
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            "devices": [
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "/dev/loop3"
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            ],
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            "lv_name": "ceph_lv0",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            "lv_size": "7511998464",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            "name": "ceph_lv0",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            "tags": {
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "ceph.cluster_name": "ceph",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "ceph.crush_device_class": "",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "ceph.encrypted": "0",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "ceph.osd_id": "1",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "ceph.type": "block",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:                "ceph.vdo": "0"
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            },
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            "type": "block",
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:            "vg_name": "ceph_vg0"
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:        }
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]:    ]
Jan 26 13:42:03 np0005596060 crazy_goldwasser[302935]: }
Jan 26 13:42:03 np0005596060 podman[302918]: 2026-01-26 18:42:03.436312892 +0000 UTC m=+0.886577586 container died d7591d04c710a3e99860ee147c0edd57ecdb9e07e76b0835daa613c8a6158548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 13:42:03 np0005596060 systemd[1]: libpod-d7591d04c710a3e99860ee147c0edd57ecdb9e07e76b0835daa613c8a6158548.scope: Deactivated successfully.
Jan 26 13:42:03 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b17e2778c5b1e5f07cd2a66846189bd2cc85716e9ffb7ec108a346abf2b4381f-merged.mount: Deactivated successfully.
Jan 26 13:42:03 np0005596060 podman[302918]: 2026-01-26 18:42:03.487023881 +0000 UTC m=+0.937288575 container remove d7591d04c710a3e99860ee147c0edd57ecdb9e07e76b0835daa613c8a6158548 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldwasser, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 13:42:03 np0005596060 systemd[1]: libpod-conmon-d7591d04c710a3e99860ee147c0edd57ecdb9e07e76b0835daa613c8a6158548.scope: Deactivated successfully.
Jan 26 13:42:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:03.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:42:04 np0005596060 podman[303149]: 2026-01-26 18:42:04.057967707 +0000 UTC m=+0.041325525 container create 20a0f98f92079f4e03283962edfb8cb88da5b215553b46eb14cc97b5b7ac803b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:42:04 np0005596060 systemd[1]: Started libpod-conmon-20a0f98f92079f4e03283962edfb8cb88da5b215553b46eb14cc97b5b7ac803b.scope.
Jan 26 13:42:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:04.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:04 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:42:04 np0005596060 podman[303149]: 2026-01-26 18:42:04.040582542 +0000 UTC m=+0.023940380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:42:04 np0005596060 podman[303149]: 2026-01-26 18:42:04.138428861 +0000 UTC m=+0.121786709 container init 20a0f98f92079f4e03283962edfb8cb88da5b215553b46eb14cc97b5b7ac803b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:42:04 np0005596060 podman[303149]: 2026-01-26 18:42:04.146929013 +0000 UTC m=+0.130286821 container start 20a0f98f92079f4e03283962edfb8cb88da5b215553b46eb14cc97b5b7ac803b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:42:04 np0005596060 agitated_lovelace[303166]: 167 167
Jan 26 13:42:04 np0005596060 podman[303149]: 2026-01-26 18:42:04.150199875 +0000 UTC m=+0.133557713 container attach 20a0f98f92079f4e03283962edfb8cb88da5b215553b46eb14cc97b5b7ac803b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:42:04 np0005596060 systemd[1]: libpod-20a0f98f92079f4e03283962edfb8cb88da5b215553b46eb14cc97b5b7ac803b.scope: Deactivated successfully.
Jan 26 13:42:04 np0005596060 podman[303149]: 2026-01-26 18:42:04.151660152 +0000 UTC m=+0.135017980 container died 20a0f98f92079f4e03283962edfb8cb88da5b215553b46eb14cc97b5b7ac803b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 26 13:42:04 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b0ab984ed4207182ef423bf3720821edfb286e41fad46822272ffd2f58b2c349-merged.mount: Deactivated successfully.
Jan 26 13:42:04 np0005596060 podman[303149]: 2026-01-26 18:42:04.188406571 +0000 UTC m=+0.171764419 container remove 20a0f98f92079f4e03283962edfb8cb88da5b215553b46eb14cc97b5b7ac803b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lovelace, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:42:04 np0005596060 systemd[1]: libpod-conmon-20a0f98f92079f4e03283962edfb8cb88da5b215553b46eb14cc97b5b7ac803b.scope: Deactivated successfully.
Jan 26 13:42:04 np0005596060 podman[303191]: 2026-01-26 18:42:04.355214765 +0000 UTC m=+0.050028572 container create 45245367761ae452cbea5264c571236b250a9f774a6b1edbd03cab23c0eb48e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:42:04 np0005596060 systemd[1]: Started libpod-conmon-45245367761ae452cbea5264c571236b250a9f774a6b1edbd03cab23c0eb48e3.scope.
Jan 26 13:42:04 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:42:04 np0005596060 podman[303191]: 2026-01-26 18:42:04.336441106 +0000 UTC m=+0.031254893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:42:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3857f3db3416e42bff3f8cda8983f9e1fd6c1cd6434c263bed0ca29f6758a54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3857f3db3416e42bff3f8cda8983f9e1fd6c1cd6434c263bed0ca29f6758a54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3857f3db3416e42bff3f8cda8983f9e1fd6c1cd6434c263bed0ca29f6758a54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:04 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3857f3db3416e42bff3f8cda8983f9e1fd6c1cd6434c263bed0ca29f6758a54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:04 np0005596060 podman[303191]: 2026-01-26 18:42:04.443168526 +0000 UTC m=+0.137982293 container init 45245367761ae452cbea5264c571236b250a9f774a6b1edbd03cab23c0eb48e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 26 13:42:04 np0005596060 podman[303191]: 2026-01-26 18:42:04.450741676 +0000 UTC m=+0.145555433 container start 45245367761ae452cbea5264c571236b250a9f774a6b1edbd03cab23c0eb48e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 26 13:42:04 np0005596060 podman[303191]: 2026-01-26 18:42:04.453646468 +0000 UTC m=+0.148460255 container attach 45245367761ae452cbea5264c571236b250a9f774a6b1edbd03cab23c0eb48e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:42:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:42:05 np0005596060 wonderful_wing[303207]: {
Jan 26 13:42:05 np0005596060 wonderful_wing[303207]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:42:05 np0005596060 wonderful_wing[303207]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:42:05 np0005596060 wonderful_wing[303207]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:42:05 np0005596060 wonderful_wing[303207]:        "osd_id": 1,
Jan 26 13:42:05 np0005596060 wonderful_wing[303207]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:42:05 np0005596060 wonderful_wing[303207]:        "type": "bluestore"
Jan 26 13:42:05 np0005596060 wonderful_wing[303207]:    }
Jan 26 13:42:05 np0005596060 wonderful_wing[303207]: }
Jan 26 13:42:05 np0005596060 systemd[1]: libpod-45245367761ae452cbea5264c571236b250a9f774a6b1edbd03cab23c0eb48e3.scope: Deactivated successfully.
Jan 26 13:42:05 np0005596060 podman[303191]: 2026-01-26 18:42:05.322287224 +0000 UTC m=+1.017101011 container died 45245367761ae452cbea5264c571236b250a9f774a6b1edbd03cab23c0eb48e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:42:05 np0005596060 systemd[1]: var-lib-containers-storage-overlay-a3857f3db3416e42bff3f8cda8983f9e1fd6c1cd6434c263bed0ca29f6758a54-merged.mount: Deactivated successfully.
Jan 26 13:42:05 np0005596060 podman[303191]: 2026-01-26 18:42:05.379216549 +0000 UTC m=+1.074030316 container remove 45245367761ae452cbea5264c571236b250a9f774a6b1edbd03cab23c0eb48e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wing, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 13:42:05 np0005596060 systemd[1]: libpod-conmon-45245367761ae452cbea5264c571236b250a9f774a6b1edbd03cab23c0eb48e3.scope: Deactivated successfully.
Jan 26 13:42:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:42:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:42:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:42:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:42:05 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev c8a1f71f-260a-4cf0-ad5e-c1683ab8a28b does not exist
Jan 26 13:42:05 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 2b61c7c5-95a8-4894-b92a-bdf233108a53 does not exist
Jan 26 13:42:05 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 748afeab-05e2-4ebf-91df-5f43684ed968 does not exist
Jan 26 13:42:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:05.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:06.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:06 np0005596060 nova_compute[247421]: 2026-01-26 18:42:06.157 247428 DEBUG nova.network.neutron [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Successfully updated port: a76d9016-429e-486e-9688-7ceb79a8fbc5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 26 13:42:06 np0005596060 nova_compute[247421]: 2026-01-26 18:42:06.337 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:06 np0005596060 nova_compute[247421]: 2026-01-26 18:42:06.410 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:42:06 np0005596060 nova_compute[247421]: 2026-01-26 18:42:06.410 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquired lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:42:06 np0005596060 nova_compute[247421]: 2026-01-26 18:42:06.410 247428 DEBUG nova.network.neutron [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:42:06 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:42:06 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:42:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:42:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:42:06 np0005596060 nova_compute[247421]: 2026-01-26 18:42:06.919 247428 DEBUG nova.compute.manager [req-f0a2e948-5abc-4f69-b9bf-6c2351ddfbc5 req-cdba4346-2d64-47b8-a3ed-5128c2fcf190 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-changed-a76d9016-429e-486e-9688-7ceb79a8fbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:42:06 np0005596060 nova_compute[247421]: 2026-01-26 18:42:06.919 247428 DEBUG nova.compute.manager [req-f0a2e948-5abc-4f69-b9bf-6c2351ddfbc5 req-cdba4346-2d64-47b8-a3ed-5128c2fcf190 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Refreshing instance network info cache due to event network-changed-a76d9016-429e-486e-9688-7ceb79a8fbc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:42:06 np0005596060 nova_compute[247421]: 2026-01-26 18:42:06.920 247428 DEBUG oslo_concurrency.lockutils [req-f0a2e948-5abc-4f69-b9bf-6c2351ddfbc5 req-cdba4346-2d64-47b8-a3ed-5128c2fcf190 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:42:07 np0005596060 nova_compute[247421]: 2026-01-26 18:42:07.495 247428 DEBUG nova.network.neutron [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 26 13:42:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:07.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:07 np0005596060 nova_compute[247421]: 2026-01-26 18:42:07.919 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:08.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:42:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:09Z|00175|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 26 13:42:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:09.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:10.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:42:10 np0005596060 nova_compute[247421]: 2026-01-26 18:42:10.969 247428 DEBUG nova.network.neutron [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Updating instance_info_cache with network_info: [{"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.084 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Releasing lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.084 247428 DEBUG nova.compute.manager [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Instance network_info: |[{"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.084 247428 DEBUG oslo_concurrency.lockutils [req-f0a2e948-5abc-4f69-b9bf-6c2351ddfbc5 req-cdba4346-2d64-47b8-a3ed-5128c2fcf190 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.085 247428 DEBUG nova.network.neutron [req-f0a2e948-5abc-4f69-b9bf-6c2351ddfbc5 req-cdba4346-2d64-47b8-a3ed-5128c2fcf190 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Refreshing network info cache for port a76d9016-429e-486e-9688-7ceb79a8fbc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.087 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Start _get_guest_xml network_info=[{"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.091 247428 WARNING nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.105 247428 DEBUG nova.virt.libvirt.host [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.106 247428 DEBUG nova.virt.libvirt.host [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.120 247428 DEBUG nova.virt.libvirt.host [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.120 247428 DEBUG nova.virt.libvirt.host [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.122 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.122 247428 DEBUG nova.virt.hardware [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.122 247428 DEBUG nova.virt.hardware [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.122 247428 DEBUG nova.virt.hardware [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.123 247428 DEBUG nova.virt.hardware [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.123 247428 DEBUG nova.virt.hardware [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.123 247428 DEBUG nova.virt.hardware [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.123 247428 DEBUG nova.virt.hardware [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.123 247428 DEBUG nova.virt.hardware [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.123 247428 DEBUG nova.virt.hardware [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.124 247428 DEBUG nova.virt.hardware [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.124 247428 DEBUG nova.virt.hardware [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.126 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.339 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:42:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3636786805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.570 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.602 247428 DEBUG nova.storage.rbd_utils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:42:11 np0005596060 nova_compute[247421]: 2026-01-26 18:42:11.607 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:42:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:42:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:11.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:42:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/623966280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.063 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.066 247428 DEBUG nova.virt.libvirt.vif [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1870761727',display_name='tempest-TestNetworkAdvancedServerOps-server-1870761727',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1870761727',id=28,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhf8yCneBiri1NAWyBA0ya0pyyYSQJ1a9HF6KVwoI/Pve/OQeuQ4yJEGv4aAQjY92iHdUS2CnnT1UTHksLJvf4vYPD+3UTTgsTTJA6SiRoW+zUAoxAoX7Qe2Gdgl++cJQ==',key_name='tempest-TestNetworkAdvancedServerOps-1931875589',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-u8yf0pcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:41:58Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.066 247428 DEBUG nova.network.os_vif_util [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.067 247428 DEBUG nova.network.os_vif_util [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.069 247428 DEBUG nova.objects.instance [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:42:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:12.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.143 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  <uuid>2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa</uuid>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  <name>instance-0000001c</name>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1870761727</nova:name>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:42:11</nova:creationTime>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <nova:user uuid="ffa1cd7ba9e543f78f2ef48c2a7a67a2">tempest-TestNetworkAdvancedServerOps-1357272614-project-member</nova:user>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <nova:project uuid="301bad5c2066428fa7f214024672bf92">tempest-TestNetworkAdvancedServerOps-1357272614</nova:project>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <nova:port uuid="a76d9016-429e-486e-9688-7ceb79a8fbc5">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <entry name="serial">2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa</entry>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <entry name="uuid">2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa</entry>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk.config">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:37:f2:1e"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <target dev="tapa76d9016-42"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa/console.log" append="off"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:42:12 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:42:12 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:42:12 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:42:12 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.144 247428 DEBUG nova.compute.manager [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Preparing to wait for external event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.144 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.145 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.145 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.145 247428 DEBUG nova.virt.libvirt.vif [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1870761727',display_name='tempest-TestNetworkAdvancedServerOps-server-1870761727',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1870761727',id=28,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhf8yCneBiri1NAWyBA0ya0pyyYSQJ1a9HF6KVwoI/Pve/OQeuQ4yJEGv4aAQjY92iHdUS2CnnT1UTHksLJvf4vYPD+3UTTgsTTJA6SiRoW+zUAoxAoX7Qe2Gdgl++cJQ==',key_name='tempest-TestNetworkAdvancedServerOps-1931875589',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-u8yf0pcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:41:58Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.146 247428 DEBUG nova.network.os_vif_util [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.146 247428 DEBUG nova.network.os_vif_util [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.147 247428 DEBUG os_vif [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.147 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.148 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.148 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.152 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.152 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa76d9016-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.152 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa76d9016-42, col_values=(('external_ids', {'iface-id': 'a76d9016-429e-486e-9688-7ceb79a8fbc5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:f2:1e', 'vm-uuid': '2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.154 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:12 np0005596060 NetworkManager[48900]: <info>  [1769452932.1550] manager: (tapa76d9016-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.156 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.161 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.161 247428 INFO os_vif [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42')#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.284 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.285 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.285 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] No VIF found with MAC fa:16:3e:37:f2:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.286 247428 INFO nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Using config drive#033[00m
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.314 247428 DEBUG nova.storage.rbd_utils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:42:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:42:12 np0005596060 nova_compute[247421]: 2026-01-26 18:42:12.919 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:13 np0005596060 nova_compute[247421]: 2026-01-26 18:42:13.482 247428 INFO nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Creating config drive at /var/lib/nova/instances/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa/disk.config#033[00m
Jan 26 13:42:13 np0005596060 nova_compute[247421]: 2026-01-26 18:42:13.486 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphjl23wly execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:42:13 np0005596060 nova_compute[247421]: 2026-01-26 18:42:13.620 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphjl23wly" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:42:13 np0005596060 nova_compute[247421]: 2026-01-26 18:42:13.651 247428 DEBUG nova.storage.rbd_utils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] rbd image 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:42:13 np0005596060 nova_compute[247421]: 2026-01-26 18:42:13.654 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa/disk.config 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:42:13 np0005596060 nova_compute[247421]: 2026-01-26 18:42:13.826 247428 DEBUG oslo_concurrency.processutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa/disk.config 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:42:13 np0005596060 nova_compute[247421]: 2026-01-26 18:42:13.827 247428 INFO nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Deleting local config drive /var/lib/nova/instances/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa/disk.config because it was imported into RBD.#033[00m
Jan 26 13:42:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:13.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:13 np0005596060 kernel: tapa76d9016-42: entered promiscuous mode
Jan 26 13:42:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:13Z|00176|binding|INFO|Claiming lport a76d9016-429e-486e-9688-7ceb79a8fbc5 for this chassis.
Jan 26 13:42:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:13Z|00177|binding|INFO|a76d9016-429e-486e-9688-7ceb79a8fbc5: Claiming fa:16:3e:37:f2:1e 10.100.0.6
Jan 26 13:42:13 np0005596060 nova_compute[247421]: 2026-01-26 18:42:13.885 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:13 np0005596060 NetworkManager[48900]: <info>  [1769452933.8874] manager: (tapa76d9016-42): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Jan 26 13:42:13 np0005596060 nova_compute[247421]: 2026-01-26 18:42:13.888 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:13 np0005596060 systemd-udevd[303428]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:42:13 np0005596060 systemd-machined[213879]: New machine qemu-15-instance-0000001c.
Jan 26 13:42:13 np0005596060 NetworkManager[48900]: <info>  [1769452933.9262] device (tapa76d9016-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:42:13 np0005596060 NetworkManager[48900]: <info>  [1769452933.9272] device (tapa76d9016-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:42:13 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:13Z|00178|binding|INFO|Setting lport a76d9016-429e-486e-9688-7ceb79a8fbc5 ovn-installed in OVS
Jan 26 13:42:13 np0005596060 systemd[1]: Started Virtual Machine qemu-15-instance-0000001c.
Jan 26 13:42:13 np0005596060 nova_compute[247421]: 2026-01-26 18:42:13.958 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:42:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:42:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:42:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:42:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:42:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:42:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:14.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:14 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:14Z|00179|binding|INFO|Setting lport a76d9016-429e-486e-9688-7ceb79a8fbc5 up in Southbound
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.272 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:f2:1e 10.100.0.6'], port_security=['fa:16:3e:37:f2:1e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '301bad5c2066428fa7f214024672bf92', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9116262-f922-4c30-b270-06114ade6067', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd646169-00bc-4f72-a516-e4fe4f18150a, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=a76d9016-429e-486e-9688-7ceb79a8fbc5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.273 159331 INFO neutron.agent.ovn.metadata.agent [-] Port a76d9016-429e-486e-9688-7ceb79a8fbc5 in datapath 74d216bf-0dc0-4b43-8bc3-cb7617fae49c bound to our chassis#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.273 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74d216bf-0dc0-4b43-8bc3-cb7617fae49c#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.286 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[cef06be6-6096-40e6-8216-e1f4048b5a14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.287 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74d216bf-01 in ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.289 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74d216bf-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.289 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[70315542-046e-4e33-ba05-7c12830ff4be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.290 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b335d0bb-4a36-45fd-ba83-cbb2b7df0403]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.303 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[e539fd57-95e6-4a85-886c-88228f26e993]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.321 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[6d05195a-759e-4bb9-9121-db70226da62a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.352 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[996a825f-73f2-4776-b6d8-71911e9e5b59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.358 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[386c8bca-c8c1-4d90-b8b0-19ec970097f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 NetworkManager[48900]: <info>  [1769452934.3597] manager: (tap74d216bf-00): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Jan 26 13:42:14 np0005596060 systemd-udevd[303431]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.387 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[f9d9b5f1-ad8a-4b10-8667-9ab4752ffe5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.390 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[b938184f-378c-4fbd-b9cb-cf1b4db19371]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 NetworkManager[48900]: <info>  [1769452934.4182] device (tap74d216bf-00): carrier: link connected
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.427 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[223b3c4d-8f54-4543-ab2a-ddbb2214dd3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.446 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[8d88523a-6674-4d51-94bb-4d592e0dade4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74d216bf-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:8f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669900, 'reachable_time': 38576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303505, 'error': None, 'target': 'ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.466 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[35b57d7c-b90a-4c78-9507-ee8a9235c32e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:8fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 669900, 'tstamp': 669900}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303506, 'error': None, 'target': 'ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.490 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[951cd474-ad64-431b-8836-cfd0b7cdd754]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74d216bf-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:8f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 52], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669900, 'reachable_time': 38576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303508, 'error': None, 'target': 'ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 nova_compute[247421]: 2026-01-26 18:42:14.497 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452934.4963944, 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:42:14 np0005596060 nova_compute[247421]: 2026-01-26 18:42:14.497 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] VM Started (Lifecycle Event)#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.521 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b92a2283-9546-429a-a361-384951408181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.583 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[ac303968-9d26-4612-aa83-f04b1bcc26d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.585 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74d216bf-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.585 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.586 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74d216bf-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:14 np0005596060 NetworkManager[48900]: <info>  [1769452934.5882] manager: (tap74d216bf-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Jan 26 13:42:14 np0005596060 kernel: tap74d216bf-00: entered promiscuous mode
Jan 26 13:42:14 np0005596060 nova_compute[247421]: 2026-01-26 18:42:14.589 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.590 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74d216bf-00, col_values=(('external_ids', {'iface-id': 'a263604e-c7db-4e16-8984-a7c390c70d2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:14 np0005596060 nova_compute[247421]: 2026-01-26 18:42:14.592 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:14 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:14Z|00180|binding|INFO|Releasing lport a263604e-c7db-4e16-8984-a7c390c70d2d from this chassis (sb_readonly=0)
Jan 26 13:42:14 np0005596060 nova_compute[247421]: 2026-01-26 18:42:14.605 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.605 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74d216bf-0dc0-4b43-8bc3-cb7617fae49c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74d216bf-0dc0-4b43-8bc3-cb7617fae49c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.606 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[bb59060b-3350-42e3-a40c-186294567319]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.607 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-74d216bf-0dc0-4b43-8bc3-cb7617fae49c
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/74d216bf-0dc0-4b43-8bc3-cb7617fae49c.pid.haproxy
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 74d216bf-0dc0-4b43-8bc3-cb7617fae49c
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.607 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'env', 'PROCESS_TAG=haproxy-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74d216bf-0dc0-4b43-8bc3-cb7617fae49c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.768 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.769 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:42:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:14.769 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:42:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:42:14 np0005596060 nova_compute[247421]: 2026-01-26 18:42:14.944 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:42:14 np0005596060 nova_compute[247421]: 2026-01-26 18:42:14.950 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452934.4977949, 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:42:14 np0005596060 nova_compute[247421]: 2026-01-26 18:42:14.950 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] VM Paused (Lifecycle Event)#033[00m
Jan 26 13:42:14 np0005596060 podman[303540]: 2026-01-26 18:42:14.965992515 +0000 UTC m=+0.054975276 container create 440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 13:42:14 np0005596060 nova_compute[247421]: 2026-01-26 18:42:14.975 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:42:14 np0005596060 nova_compute[247421]: 2026-01-26 18:42:14.977 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:42:15 np0005596060 systemd[1]: Started libpod-conmon-440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77.scope.
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.018 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:42:15 np0005596060 podman[303540]: 2026-01-26 18:42:14.934150319 +0000 UTC m=+0.023133070 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:42:15 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:42:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11311dd4114c8a332db69cb0232695a9eb058b7f81782e19acce17581724be54/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:15 np0005596060 podman[303540]: 2026-01-26 18:42:15.054715595 +0000 UTC m=+0.143698346 container init 440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:42:15 np0005596060 podman[303540]: 2026-01-26 18:42:15.059972957 +0000 UTC m=+0.148955688 container start 440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 26 13:42:15 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[303555]: [NOTICE]   (303559) : New worker (303561) forked
Jan 26 13:42:15 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[303555]: [NOTICE]   (303559) : Loading success.
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.710 247428 DEBUG nova.compute.manager [req-a2c281ae-9b98-45df-b0af-dc8ade05ee68 req-40fefb7c-0e94-4832-ab61-5660075e97ac 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.710 247428 DEBUG oslo_concurrency.lockutils [req-a2c281ae-9b98-45df-b0af-dc8ade05ee68 req-40fefb7c-0e94-4832-ab61-5660075e97ac 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.710 247428 DEBUG oslo_concurrency.lockutils [req-a2c281ae-9b98-45df-b0af-dc8ade05ee68 req-40fefb7c-0e94-4832-ab61-5660075e97ac 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.711 247428 DEBUG oslo_concurrency.lockutils [req-a2c281ae-9b98-45df-b0af-dc8ade05ee68 req-40fefb7c-0e94-4832-ab61-5660075e97ac 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.711 247428 DEBUG nova.compute.manager [req-a2c281ae-9b98-45df-b0af-dc8ade05ee68 req-40fefb7c-0e94-4832-ab61-5660075e97ac 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Processing event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.711 247428 DEBUG nova.compute.manager [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.716 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.717 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452935.716545, 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.717 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] VM Resumed (Lifecycle Event)
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.721 247428 INFO nova.virt.libvirt.driver [-] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Instance spawned successfully.
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.721 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.743 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.750 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.753 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.754 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.754 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.754 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.755 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.755 247428 DEBUG nova.virt.libvirt.driver [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.799 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.838 247428 INFO nova.compute.manager [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Took 16.84 seconds to spawn the instance on the hypervisor.
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.839 247428 DEBUG nova.compute.manager [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 13:42:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:15.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.916 247428 INFO nova.compute.manager [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Took 17.95 seconds to build instance.
Jan 26 13:42:15 np0005596060 nova_compute[247421]: 2026-01-26 18:42:15.938 247428 DEBUG oslo_concurrency.lockutils [None req-b962efa8-4015-407d-9d43-f1e4f36ea6c3 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:42:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:16.093 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 13:42:16 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:16.094 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 13:42:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:16.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:16 np0005596060 nova_compute[247421]: 2026-01-26 18:42:16.135 247428 DEBUG nova.network.neutron [req-f0a2e948-5abc-4f69-b9bf-6c2351ddfbc5 req-cdba4346-2d64-47b8-a3ed-5128c2fcf190 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Updated VIF entry in instance network info cache for port a76d9016-429e-486e-9688-7ceb79a8fbc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 13:42:16 np0005596060 nova_compute[247421]: 2026-01-26 18:42:16.136 247428 DEBUG nova.network.neutron [req-f0a2e948-5abc-4f69-b9bf-6c2351ddfbc5 req-cdba4346-2d64-47b8-a3ed-5128c2fcf190 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Updating instance_info_cache with network_info: [{"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 13:42:16 np0005596060 nova_compute[247421]: 2026-01-26 18:42:16.137 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:42:16 np0005596060 nova_compute[247421]: 2026-01-26 18:42:16.171 247428 DEBUG oslo_concurrency.lockutils [req-f0a2e948-5abc-4f69-b9bf-6c2351ddfbc5 req-cdba4346-2d64-47b8-a3ed-5128c2fcf190 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 13:42:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:42:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 5.8 KiB/s rd, 0 op/s
Jan 26 13:42:17 np0005596060 nova_compute[247421]: 2026-01-26 18:42:17.154 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:42:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:17.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:17 np0005596060 nova_compute[247421]: 2026-01-26 18:42:17.920 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:42:18 np0005596060 nova_compute[247421]: 2026-01-26 18:42:18.035 247428 DEBUG nova.compute.manager [req-9c44fc89-a7c9-4803-aa1c-42831db4ef61 req-d894ccaa-fb5a-4d06-95b8-ede533675191 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 13:42:18 np0005596060 nova_compute[247421]: 2026-01-26 18:42:18.035 247428 DEBUG oslo_concurrency.lockutils [req-9c44fc89-a7c9-4803-aa1c-42831db4ef61 req-d894ccaa-fb5a-4d06-95b8-ede533675191 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:42:18 np0005596060 nova_compute[247421]: 2026-01-26 18:42:18.036 247428 DEBUG oslo_concurrency.lockutils [req-9c44fc89-a7c9-4803-aa1c-42831db4ef61 req-d894ccaa-fb5a-4d06-95b8-ede533675191 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:42:18 np0005596060 nova_compute[247421]: 2026-01-26 18:42:18.036 247428 DEBUG oslo_concurrency.lockutils [req-9c44fc89-a7c9-4803-aa1c-42831db4ef61 req-d894ccaa-fb5a-4d06-95b8-ede533675191 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:42:18 np0005596060 nova_compute[247421]: 2026-01-26 18:42:18.036 247428 DEBUG nova.compute.manager [req-9c44fc89-a7c9-4803-aa1c-42831db4ef61 req-d894ccaa-fb5a-4d06-95b8-ede533675191 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] No waiting events found dispatching network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 13:42:18 np0005596060 nova_compute[247421]: 2026-01-26 18:42:18.036 247428 WARNING nova.compute.manager [req-9c44fc89-a7c9-4803-aa1c-42831db4ef61 req-d894ccaa-fb5a-4d06-95b8-ede533675191 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received unexpected event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 for instance with vm_state active and task_state None.
Jan 26 13:42:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:18.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 697 KiB/s rd, 12 KiB/s wr, 33 op/s
Jan 26 13:42:19 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:19.096 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 13:42:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:19.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:20.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:20 np0005596060 NetworkManager[48900]: <info>  [1769452940.6286] manager: (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Jan 26 13:42:20 np0005596060 NetworkManager[48900]: <info>  [1769452940.6293] manager: (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/97)
Jan 26 13:42:20 np0005596060 nova_compute[247421]: 2026-01-26 18:42:20.628 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:42:20 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:20Z|00181|binding|INFO|Releasing lport a263604e-c7db-4e16-8984-a7c390c70d2d from this chassis (sb_readonly=0)
Jan 26 13:42:20 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:20Z|00182|binding|INFO|Releasing lport a263604e-c7db-4e16-8984-a7c390c70d2d from this chassis (sb_readonly=0)
Jan 26 13:42:20 np0005596060 nova_compute[247421]: 2026-01-26 18:42:20.660 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:42:20 np0005596060 nova_compute[247421]: 2026-01-26 18:42:20.675 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:42:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 697 KiB/s rd, 12 KiB/s wr, 33 op/s
Jan 26 13:42:21 np0005596060 nova_compute[247421]: 2026-01-26 18:42:21.565 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:42:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:42:21 np0005596060 nova_compute[247421]: 2026-01-26 18:42:21.775 247428 DEBUG nova.compute.manager [req-383aaef3-4ce8-4021-892a-480d4eda6cbd req-6778221b-162a-4aa5-9fe8-cd75c074c3f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-changed-a76d9016-429e-486e-9688-7ceb79a8fbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 13:42:21 np0005596060 nova_compute[247421]: 2026-01-26 18:42:21.776 247428 DEBUG nova.compute.manager [req-383aaef3-4ce8-4021-892a-480d4eda6cbd req-6778221b-162a-4aa5-9fe8-cd75c074c3f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Refreshing instance network info cache due to event network-changed-a76d9016-429e-486e-9688-7ceb79a8fbc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 13:42:21 np0005596060 nova_compute[247421]: 2026-01-26 18:42:21.776 247428 DEBUG oslo_concurrency.lockutils [req-383aaef3-4ce8-4021-892a-480d4eda6cbd req-6778221b-162a-4aa5-9fe8-cd75c074c3f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 13:42:21 np0005596060 nova_compute[247421]: 2026-01-26 18:42:21.776 247428 DEBUG oslo_concurrency.lockutils [req-383aaef3-4ce8-4021-892a-480d4eda6cbd req-6778221b-162a-4aa5-9fe8-cd75c074c3f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 13:42:21 np0005596060 nova_compute[247421]: 2026-01-26 18:42:21.776 247428 DEBUG nova.network.neutron [req-383aaef3-4ce8-4021-892a-480d4eda6cbd req-6778221b-162a-4aa5-9fe8-cd75c074c3f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Refreshing network info cache for port a76d9016-429e-486e-9688-7ceb79a8fbc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 13:42:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:21.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:22.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:22 np0005596060 nova_compute[247421]: 2026-01-26 18:42:22.157 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:42:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:42:22 np0005596060 nova_compute[247421]: 2026-01-26 18:42:22.922 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:42:23 np0005596060 nova_compute[247421]: 2026-01-26 18:42:23.496 247428 DEBUG nova.network.neutron [req-383aaef3-4ce8-4021-892a-480d4eda6cbd req-6778221b-162a-4aa5-9fe8-cd75c074c3f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Updated VIF entry in instance network info cache for port a76d9016-429e-486e-9688-7ceb79a8fbc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 13:42:23 np0005596060 nova_compute[247421]: 2026-01-26 18:42:23.497 247428 DEBUG nova.network.neutron [req-383aaef3-4ce8-4021-892a-480d4eda6cbd req-6778221b-162a-4aa5-9fe8-cd75c074c3f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Updating instance_info_cache with network_info: [{"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 13:42:23 np0005596060 nova_compute[247421]: 2026-01-26 18:42:23.638 247428 DEBUG oslo_concurrency.lockutils [req-383aaef3-4ce8-4021-892a-480d4eda6cbd req-6778221b-162a-4aa5-9fe8-cd75c074c3f1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 13:42:23 np0005596060 nova_compute[247421]: 2026-01-26 18:42:23.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:42:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:23.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:42:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:24.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:42:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:42:25 np0005596060 nova_compute[247421]: 2026-01-26 18:42:25.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:42:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:25.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:26.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:42:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:42:27 np0005596060 nova_compute[247421]: 2026-01-26 18:42:27.161 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:42:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:27.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:27 np0005596060 nova_compute[247421]: 2026-01-26 18:42:27.925 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:42:28 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:28Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:37:f2:1e 10.100.0.6
Jan 26 13:42:28 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:28Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:f2:1e 10.100.0.6
Jan 26 13:42:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:28.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:28 np0005596060 nova_compute[247421]: 2026-01-26 18:42:28.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:42:28 np0005596060 nova_compute[247421]: 2026-01-26 18:42:28.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:42:28 np0005596060 nova_compute[247421]: 2026-01-26 18:42:28.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 13:42:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 94 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 560 KiB/s wr, 84 op/s
Jan 26 13:42:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:29.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:30.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:30 np0005596060 nova_compute[247421]: 2026-01-26 18:42:30.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:42:30 np0005596060 nova_compute[247421]: 2026-01-26 18:42:30.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:42:30 np0005596060 nova_compute[247421]: 2026-01-26 18:42:30.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:42:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 94 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 548 KiB/s wr, 51 op/s
Jan 26 13:42:31 np0005596060 nova_compute[247421]: 2026-01-26 18:42:31.463 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:42:31 np0005596060 nova_compute[247421]: 2026-01-26 18:42:31.464 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquired lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:42:31 np0005596060 nova_compute[247421]: 2026-01-26 18:42:31.464 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 26 13:42:31 np0005596060 nova_compute[247421]: 2026-01-26 18:42:31.464 247428 DEBUG nova.objects.instance [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:42:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:42:31 np0005596060 podman[303630]: 2026-01-26 18:42:31.79731151 +0000 UTC m=+0.060081384 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:42:31 np0005596060 podman[303631]: 2026-01-26 18:42:31.844376928 +0000 UTC m=+0.103240314 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 13:42:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:31.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:32.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:32 np0005596060 nova_compute[247421]: 2026-01-26 18:42:32.163 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 121 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 102 op/s
Jan 26 13:42:32 np0005596060 nova_compute[247421]: 2026-01-26 18:42:32.926 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:33.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.107 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Updating instance_info_cache with network_info: [{"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.130 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Releasing lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.130 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.131 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.131 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.131 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.156 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.156 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.156 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.156 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.157 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:42:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:34.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:42:34 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3705455237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.587 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.755 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.756 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:42:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 121 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.901 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.902 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4436MB free_disk=20.942890167236328GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.902 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:42:34 np0005596060 nova_compute[247421]: 2026-01-26 18:42:34.902 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.057 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.057 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.058 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.162 247428 INFO nova.compute.manager [None req-1f751520-8be1-4ff4-915b-9c259281847a ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Get console output#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.167 285734 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.170 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:42:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:42:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/934351964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.603 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.609 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.650 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.662 247428 DEBUG oslo_concurrency.lockutils [None req-66c6ffc8-ebab-47fc-9006-e6096328c71b ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.663 247428 DEBUG oslo_concurrency.lockutils [None req-66c6ffc8-ebab-47fc-9006-e6096328c71b ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.663 247428 DEBUG nova.compute.manager [None req-66c6ffc8-ebab-47fc-9006-e6096328c71b ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.666 247428 DEBUG nova.compute.manager [None req-66c6ffc8-ebab-47fc-9006-e6096328c71b ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.667 247428 DEBUG nova.objects.instance [None req-66c6ffc8-ebab-47fc-9006-e6096328c71b ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'flavor' on Instance uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.693 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.693 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:42:35 np0005596060 nova_compute[247421]: 2026-01-26 18:42:35.700 247428 DEBUG nova.virt.libvirt.driver [None req-66c6ffc8-ebab-47fc-9006-e6096328c71b ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 26 13:42:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:35.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:36.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:42:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 121 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 13:42:37 np0005596060 nova_compute[247421]: 2026-01-26 18:42:37.218 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:37.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:37 np0005596060 nova_compute[247421]: 2026-01-26 18:42:37.929 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:38 np0005596060 kernel: tapa76d9016-42 (unregistering): left promiscuous mode
Jan 26 13:42:38 np0005596060 NetworkManager[48900]: <info>  [1769452958.0680] device (tapa76d9016-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:42:38 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:38Z|00183|binding|INFO|Releasing lport a76d9016-429e-486e-9688-7ceb79a8fbc5 from this chassis (sb_readonly=0)
Jan 26 13:42:38 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:38Z|00184|binding|INFO|Setting lport a76d9016-429e-486e-9688-7ceb79a8fbc5 down in Southbound
Jan 26 13:42:38 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:38Z|00185|binding|INFO|Removing iface tapa76d9016-42 ovn-installed in OVS
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.075 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.077 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.098 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:38 np0005596060 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Jan 26 13:42:38 np0005596060 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001c.scope: Consumed 13.288s CPU time.
Jan 26 13:42:38 np0005596060 systemd-machined[213879]: Machine qemu-15-instance-0000001c terminated.
Jan 26 13:42:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:38.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.232 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:f2:1e 10.100.0.6'], port_security=['fa:16:3e:37:f2:1e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '301bad5c2066428fa7f214024672bf92', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9116262-f922-4c30-b270-06114ade6067', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd646169-00bc-4f72-a516-e4fe4f18150a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=a76d9016-429e-486e-9688-7ceb79a8fbc5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.233 159331 INFO neutron.agent.ovn.metadata.agent [-] Port a76d9016-429e-486e-9688-7ceb79a8fbc5 in datapath 74d216bf-0dc0-4b43-8bc3-cb7617fae49c unbound from our chassis#033[00m
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.234 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74d216bf-0dc0-4b43-8bc3-cb7617fae49c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.235 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[4536ffdc-1f45-4067-bdeb-9bcfb80122b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.237 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c namespace which is not needed anymore#033[00m
Jan 26 13:42:38 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[303555]: [NOTICE]   (303559) : haproxy version is 2.8.14-c23fe91
Jan 26 13:42:38 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[303555]: [NOTICE]   (303559) : path to executable is /usr/sbin/haproxy
Jan 26 13:42:38 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[303555]: [WARNING]  (303559) : Exiting Master process...
Jan 26 13:42:38 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[303555]: [ALERT]    (303559) : Current worker (303561) exited with code 143 (Terminated)
Jan 26 13:42:38 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[303555]: [WARNING]  (303559) : All workers exited. Exiting... (0)
Jan 26 13:42:38 np0005596060 systemd[1]: libpod-440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77.scope: Deactivated successfully.
Jan 26 13:42:38 np0005596060 podman[303758]: 2026-01-26 18:42:38.392851688 +0000 UTC m=+0.050924405 container died 440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:42:38 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77-userdata-shm.mount: Deactivated successfully.
Jan 26 13:42:38 np0005596060 systemd[1]: var-lib-containers-storage-overlay-11311dd4114c8a332db69cb0232695a9eb058b7f81782e19acce17581724be54-merged.mount: Deactivated successfully.
Jan 26 13:42:38 np0005596060 podman[303758]: 2026-01-26 18:42:38.432594242 +0000 UTC m=+0.090666969 container cleanup 440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:42:38 np0005596060 systemd[1]: libpod-conmon-440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77.scope: Deactivated successfully.
Jan 26 13:42:38 np0005596060 podman[303788]: 2026-01-26 18:42:38.498273696 +0000 UTC m=+0.043916690 container remove 440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.504 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f66cea7c-198d-4523-ab14-90bc21225ff3]: (4, ('Mon Jan 26 06:42:38 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c (440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77)\n440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77\nMon Jan 26 06:42:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c (440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77)\n440d79ffd9c7ec2fd175ea1ae9b37edeac2f790bbe6eabede37810bceb4fac77\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.506 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[976b288c-4cd5-41a3-9918-6b2a6eb2cf78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.507 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74d216bf-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.551 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:38 np0005596060 kernel: tap74d216bf-00: left promiscuous mode
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.570 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.573 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[eb81e82d-4331-4105-89fe-f3a826cc569e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.595 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[12b55012-a02b-4867-8dfc-22f23bc38080]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.596 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0c9773-66c8-404f-b668-dc1c10e1b1fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.619 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[dc727bdb-0202-40e3-9aa8-570180231577]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 669892, 'reachable_time': 15459, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303804, 'error': None, 'target': 'ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.622 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:42:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:38.622 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[9e219f93-e9b7-43c9-a599-9deacbc8dfcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:38 np0005596060 systemd[1]: run-netns-ovnmeta\x2d74d216bf\x2d0dc0\x2d4b43\x2d8bc3\x2dcb7617fae49c.mount: Deactivated successfully.
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.719 247428 INFO nova.virt.libvirt.driver [None req-66c6ffc8-ebab-47fc-9006-e6096328c71b ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Instance shutdown successfully after 3 seconds.#033[00m
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.724 247428 INFO nova.virt.libvirt.driver [-] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Instance destroyed successfully.#033[00m
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.724 247428 DEBUG nova.objects.instance [None req-66c6ffc8-ebab-47fc-9006-e6096328c71b ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.754 247428 DEBUG nova.compute.manager [None req-66c6ffc8-ebab-47fc-9006-e6096328c71b ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.847 247428 DEBUG oslo_concurrency.lockutils [None req-66c6ffc8-ebab-47fc-9006-e6096328c71b ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:42:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 327 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.925 247428 DEBUG nova.compute.manager [req-81335f9b-9879-4c4c-bd31-8bc8082e2e75 req-41cba967-55a4-424b-80b0-c13ddde1f6d0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-vif-unplugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.926 247428 DEBUG oslo_concurrency.lockutils [req-81335f9b-9879-4c4c-bd31-8bc8082e2e75 req-41cba967-55a4-424b-80b0-c13ddde1f6d0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.926 247428 DEBUG oslo_concurrency.lockutils [req-81335f9b-9879-4c4c-bd31-8bc8082e2e75 req-41cba967-55a4-424b-80b0-c13ddde1f6d0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.926 247428 DEBUG oslo_concurrency.lockutils [req-81335f9b-9879-4c4c-bd31-8bc8082e2e75 req-41cba967-55a4-424b-80b0-c13ddde1f6d0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.927 247428 DEBUG nova.compute.manager [req-81335f9b-9879-4c4c-bd31-8bc8082e2e75 req-41cba967-55a4-424b-80b0-c13ddde1f6d0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] No waiting events found dispatching network-vif-unplugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:42:38 np0005596060 nova_compute[247421]: 2026-01-26 18:42:38.927 247428 WARNING nova.compute.manager [req-81335f9b-9879-4c4c-bd31-8bc8082e2e75 req-41cba967-55a4-424b-80b0-c13ddde1f6d0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received unexpected event network-vif-unplugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 for instance with vm_state stopped and task_state None.#033[00m
Jan 26 13:42:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:39.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:40.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 298 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Jan 26 13:42:41 np0005596060 nova_compute[247421]: 2026-01-26 18:42:41.119 247428 DEBUG nova.compute.manager [req-d8fa029a-8bd6-41d6-ae57-c51cb2bb341b req-3293d8e3-af42-441e-92ad-546712ec8da3 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:42:41 np0005596060 nova_compute[247421]: 2026-01-26 18:42:41.120 247428 DEBUG oslo_concurrency.lockutils [req-d8fa029a-8bd6-41d6-ae57-c51cb2bb341b req-3293d8e3-af42-441e-92ad-546712ec8da3 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:42:41 np0005596060 nova_compute[247421]: 2026-01-26 18:42:41.120 247428 DEBUG oslo_concurrency.lockutils [req-d8fa029a-8bd6-41d6-ae57-c51cb2bb341b req-3293d8e3-af42-441e-92ad-546712ec8da3 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:42:41 np0005596060 nova_compute[247421]: 2026-01-26 18:42:41.120 247428 DEBUG oslo_concurrency.lockutils [req-d8fa029a-8bd6-41d6-ae57-c51cb2bb341b req-3293d8e3-af42-441e-92ad-546712ec8da3 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:42:41 np0005596060 nova_compute[247421]: 2026-01-26 18:42:41.120 247428 DEBUG nova.compute.manager [req-d8fa029a-8bd6-41d6-ae57-c51cb2bb341b req-3293d8e3-af42-441e-92ad-546712ec8da3 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] No waiting events found dispatching network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:42:41 np0005596060 nova_compute[247421]: 2026-01-26 18:42:41.120 247428 WARNING nova.compute.manager [req-d8fa029a-8bd6-41d6-ae57-c51cb2bb341b req-3293d8e3-af42-441e-92ad-546712ec8da3 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received unexpected event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 for instance with vm_state stopped and task_state None.#033[00m
Jan 26 13:42:41 np0005596060 nova_compute[247421]: 2026-01-26 18:42:41.688 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:42:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:42:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:41.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:42.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:42 np0005596060 nova_compute[247421]: 2026-01-26 18:42:42.189 247428 INFO nova.compute.manager [None req-92ad348f-14b8-432a-8204-d7a477899130 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Get console output#033[00m
Jan 26 13:42:42 np0005596060 nova_compute[247421]: 2026-01-26 18:42:42.221 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:42 np0005596060 nova_compute[247421]: 2026-01-26 18:42:42.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:42:42 np0005596060 nova_compute[247421]: 2026-01-26 18:42:42.763 247428 DEBUG nova.objects.instance [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'flavor' on Instance uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:42:42 np0005596060 nova_compute[247421]: 2026-01-26 18:42:42.842 247428 DEBUG oslo_concurrency.lockutils [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:42:42 np0005596060 nova_compute[247421]: 2026-01-26 18:42:42.843 247428 DEBUG oslo_concurrency.lockutils [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquired lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:42:42 np0005596060 nova_compute[247421]: 2026-01-26 18:42:42.843 247428 DEBUG nova.network.neutron [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:42:42 np0005596060 nova_compute[247421]: 2026-01-26 18:42:42.843 247428 DEBUG nova.objects.instance [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'info_cache' on Instance uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:42:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 298 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Jan 26 13:42:42 np0005596060 nova_compute[247421]: 2026-01-26 18:42:42.932 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:43.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:42:44
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta']
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:42:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:44.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:42:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.523 247428 DEBUG nova.network.neutron [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Updating instance_info_cache with network_info: [{"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.561 247428 DEBUG oslo_concurrency.lockutils [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Releasing lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.594 247428 INFO nova.virt.libvirt.driver [-] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Instance destroyed successfully.#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.595 247428 DEBUG nova.objects.instance [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'numa_topology' on Instance uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.819 247428 DEBUG nova.objects.instance [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'resources' on Instance uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.831 247428 DEBUG nova.virt.libvirt.vif [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1870761727',display_name='tempest-TestNetworkAdvancedServerOps-server-1870761727',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1870761727',id=28,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhf8yCneBiri1NAWyBA0ya0pyyYSQJ1a9HF6KVwoI/Pve/OQeuQ4yJEGv4aAQjY92iHdUS2CnnT1UTHksLJvf4vYPD+3UTTgsTTJA6SiRoW+zUAoxAoX7Qe2Gdgl++cJQ==',key_name='tempest-TestNetworkAdvancedServerOps-1931875589',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:42:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-u8yf0pcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:42:38Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.831 247428 DEBUG nova.network.os_vif_util [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.832 247428 DEBUG nova.network.os_vif_util [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.833 247428 DEBUG os_vif [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.835 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.835 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa76d9016-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.836 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.840 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.842 247428 INFO os_vif [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42')#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.850 247428 DEBUG nova.virt.libvirt.driver [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Start _get_guest_xml network_info=[{"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.853 247428 WARNING nova.virt.libvirt.driver [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.861 247428 DEBUG nova.virt.libvirt.host [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.861 247428 DEBUG nova.virt.libvirt.host [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.864 247428 DEBUG nova.virt.libvirt.host [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.865 247428 DEBUG nova.virt.libvirt.host [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.866 247428 DEBUG nova.virt.libvirt.driver [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.866 247428 DEBUG nova.virt.hardware [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.866 247428 DEBUG nova.virt.hardware [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.866 247428 DEBUG nova.virt.hardware [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.866 247428 DEBUG nova.virt.hardware [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.867 247428 DEBUG nova.virt.hardware [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.867 247428 DEBUG nova.virt.hardware [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.867 247428 DEBUG nova.virt.hardware [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.867 247428 DEBUG nova.virt.hardware [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.867 247428 DEBUG nova.virt.hardware [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.867 247428 DEBUG nova.virt.hardware [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.868 247428 DEBUG nova.virt.hardware [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.868 247428 DEBUG nova.objects.instance [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:42:45 np0005596060 nova_compute[247421]: 2026-01-26 18:42:45.896 247428 DEBUG oslo_concurrency.processutils [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:42:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:45.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:46.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:42:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760872200' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.360 247428 DEBUG oslo_concurrency.processutils [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.402 247428 DEBUG oslo_concurrency.processutils [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:42:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:42:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:42:46 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2767658279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.808 247428 DEBUG oslo_concurrency.processutils [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.811 247428 DEBUG nova.virt.libvirt.vif [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1870761727',display_name='tempest-TestNetworkAdvancedServerOps-server-1870761727',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1870761727',id=28,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhf8yCneBiri1NAWyBA0ya0pyyYSQJ1a9HF6KVwoI/Pve/OQeuQ4yJEGv4aAQjY92iHdUS2CnnT1UTHksLJvf4vYPD+3UTTgsTTJA6SiRoW+zUAoxAoX7Qe2Gdgl++cJQ==',key_name='tempest-TestNetworkAdvancedServerOps-1931875589',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:42:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-u8yf0pcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:42:38Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.812 247428 DEBUG nova.network.os_vif_util [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.814 247428 DEBUG nova.network.os_vif_util [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.816 247428 DEBUG nova.objects.instance [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:42:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 1.8 KiB/s rd, 30 KiB/s wr, 4 op/s
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.904 247428 DEBUG nova.virt.libvirt.driver [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  <uuid>2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa</uuid>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  <name>instance-0000001c</name>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1870761727</nova:name>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:42:45</nova:creationTime>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <nova:user uuid="ffa1cd7ba9e543f78f2ef48c2a7a67a2">tempest-TestNetworkAdvancedServerOps-1357272614-project-member</nova:user>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <nova:project uuid="301bad5c2066428fa7f214024672bf92">tempest-TestNetworkAdvancedServerOps-1357272614</nova:project>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <nova:port uuid="a76d9016-429e-486e-9688-7ceb79a8fbc5">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <entry name="serial">2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa</entry>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <entry name="uuid">2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa</entry>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_disk.config">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:37:f2:1e"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <target dev="tapa76d9016-42"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa/console.log" append="off"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <input type="keyboard" bus="usb"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:42:46 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:42:46 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:42:46 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:42:46 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.906 247428 DEBUG nova.virt.libvirt.driver [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.907 247428 DEBUG nova.virt.libvirt.driver [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.909 247428 DEBUG nova.virt.libvirt.vif [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1870761727',display_name='tempest-TestNetworkAdvancedServerOps-server-1870761727',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1870761727',id=28,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhf8yCneBiri1NAWyBA0ya0pyyYSQJ1a9HF6KVwoI/Pve/OQeuQ4yJEGv4aAQjY92iHdUS2CnnT1UTHksLJvf4vYPD+3UTTgsTTJA6SiRoW+zUAoxAoX7Qe2Gdgl++cJQ==',key_name='tempest-TestNetworkAdvancedServerOps-1931875589',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:42:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-u8yf0pcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:42:38Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.909 247428 DEBUG nova.network.os_vif_util [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.910 247428 DEBUG nova.network.os_vif_util [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.911 247428 DEBUG os_vif [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.912 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.913 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.914 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.918 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.919 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa76d9016-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.919 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa76d9016-42, col_values=(('external_ids', {'iface-id': 'a76d9016-429e-486e-9688-7ceb79a8fbc5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:f2:1e', 'vm-uuid': '2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:46 np0005596060 NetworkManager[48900]: <info>  [1769452966.9239] manager: (tapa76d9016-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.926 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.932 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:46 np0005596060 nova_compute[247421]: 2026-01-26 18:42:46.936 247428 INFO os_vif [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42')#033[00m
Jan 26 13:42:47 np0005596060 kernel: tapa76d9016-42: entered promiscuous mode
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.020 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:47 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:47Z|00186|binding|INFO|Claiming lport a76d9016-429e-486e-9688-7ceb79a8fbc5 for this chassis.
Jan 26 13:42:47 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:47Z|00187|binding|INFO|a76d9016-429e-486e-9688-7ceb79a8fbc5: Claiming fa:16:3e:37:f2:1e 10.100.0.6
Jan 26 13:42:47 np0005596060 NetworkManager[48900]: <info>  [1769452967.0218] manager: (tapa76d9016-42): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Jan 26 13:42:47 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:47Z|00188|binding|INFO|Setting lport a76d9016-429e-486e-9688-7ceb79a8fbc5 ovn-installed in OVS
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.036 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.038 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:47 np0005596060 systemd-udevd[303936]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:42:47 np0005596060 systemd-machined[213879]: New machine qemu-16-instance-0000001c.
Jan 26 13:42:47 np0005596060 NetworkManager[48900]: <info>  [1769452967.0649] device (tapa76d9016-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:42:47 np0005596060 NetworkManager[48900]: <info>  [1769452967.0659] device (tapa76d9016-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:42:47 np0005596060 systemd[1]: Started Virtual Machine qemu-16-instance-0000001c.
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.100 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:f2:1e 10.100.0.6'], port_security=['fa:16:3e:37:f2:1e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '301bad5c2066428fa7f214024672bf92', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a9116262-f922-4c30-b270-06114ade6067', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd646169-00bc-4f72-a516-e4fe4f18150a, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=a76d9016-429e-486e-9688-7ceb79a8fbc5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:42:47 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:47Z|00189|binding|INFO|Setting lport a76d9016-429e-486e-9688-7ceb79a8fbc5 up in Southbound
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.103 159331 INFO neutron.agent.ovn.metadata.agent [-] Port a76d9016-429e-486e-9688-7ceb79a8fbc5 in datapath 74d216bf-0dc0-4b43-8bc3-cb7617fae49c bound to our chassis#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.104 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74d216bf-0dc0-4b43-8bc3-cb7617fae49c#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.118 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f8668f1a-6a36-4363-ba05-4695f26b81b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.119 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74d216bf-01 in ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.122 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74d216bf-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.122 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[8c039b91-c77f-49dc-8618-2dfec456a66f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.123 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[9aee3614-ad74-45f1-9fab-9e38cfeaa5ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.138 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[65367a34-bf2e-42cb-8dbe-7c40b8923aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.154 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[44de2a39-6dee-438e-8402-e3eed87e669b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.184 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[47a298a8-015c-4544-8332-50b1c9190ff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.190 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[dd844e99-de68-46f9-b188-5ff38c253661]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 NetworkManager[48900]: <info>  [1769452967.1918] manager: (tap74d216bf-00): new Veth device (/org/freedesktop/NetworkManager/Devices/100)
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.226 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[be412f5c-9c28-4f03-9f36-a2cc0f18b465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.231 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[7f18b231-37ca-4072-a907-8487b797be13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 NetworkManager[48900]: <info>  [1769452967.2537] device (tap74d216bf-00): carrier: link connected
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.260 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[b51171fc-e7ba-4491-b4bb-0381aca83301]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.286 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[635c81ba-cb69-4023-b974-e8d9fd026ea7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74d216bf-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:8f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673183, 'reachable_time': 18171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303969, 'error': None, 'target': 'ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.308 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[d45e77e9-ba24-4656-9248-e32dce54b09c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:8fbb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 673183, 'tstamp': 673183}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303970, 'error': None, 'target': 'ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.332 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[9c00b570-4adb-4d52-841f-679ef9b96753]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74d216bf-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:8f:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673183, 'reachable_time': 18171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303971, 'error': None, 'target': 'ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.374 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[953afe51-79a0-448a-b357-47e5196fa0cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.444 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1a587f-6254-49dc-8553-d96e939459b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.446 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74d216bf-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.446 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.447 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74d216bf-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:47 np0005596060 kernel: tap74d216bf-00: entered promiscuous mode
Jan 26 13:42:47 np0005596060 NetworkManager[48900]: <info>  [1769452967.4507] manager: (tap74d216bf-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.450 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.451 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74d216bf-00, col_values=(('external_ids', {'iface-id': 'a263604e-c7db-4e16-8984-a7c390c70d2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.452 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:47 np0005596060 ovn_controller[148842]: 2026-01-26T18:42:47Z|00190|binding|INFO|Releasing lport a263604e-c7db-4e16-8984-a7c390c70d2d from this chassis (sb_readonly=0)
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.466 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.467 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74d216bf-0dc0-4b43-8bc3-cb7617fae49c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74d216bf-0dc0-4b43-8bc3-cb7617fae49c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.468 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[c17514b3-fd88-4a76-a351-240b11806609]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.468 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-74d216bf-0dc0-4b43-8bc3-cb7617fae49c
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/74d216bf-0dc0-4b43-8bc3-cb7617fae49c.pid.haproxy
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 74d216bf-0dc0-4b43-8bc3-cb7617fae49c
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:42:47 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:42:47.470 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'env', 'PROCESS_TAG=haproxy-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74d216bf-0dc0-4b43-8bc3-cb7617fae49c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.639 247428 DEBUG nova.virt.libvirt.host [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Removed pending event for 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.639 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452967.637892, 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.640 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.642 247428 DEBUG nova.compute.manager [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.648 247428 INFO nova.virt.libvirt.driver [-] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Instance rebooted successfully.#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.649 247428 DEBUG nova.compute.manager [None req-2f763c68-24e0-4dd7-b0c6-113b75f31cfa ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.678 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.682 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.732 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.733 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769452967.6382506, 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.733 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] VM Started (Lifecycle Event)#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.750 247428 DEBUG nova.compute.manager [req-e51735d6-c1be-491b-b494-e00326fd3311 req-f0464ed5-3dc1-47e6-bbaa-b15170fd8587 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.750 247428 DEBUG oslo_concurrency.lockutils [req-e51735d6-c1be-491b-b494-e00326fd3311 req-f0464ed5-3dc1-47e6-bbaa-b15170fd8587 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.750 247428 DEBUG oslo_concurrency.lockutils [req-e51735d6-c1be-491b-b494-e00326fd3311 req-f0464ed5-3dc1-47e6-bbaa-b15170fd8587 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.751 247428 DEBUG oslo_concurrency.lockutils [req-e51735d6-c1be-491b-b494-e00326fd3311 req-f0464ed5-3dc1-47e6-bbaa-b15170fd8587 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.751 247428 DEBUG nova.compute.manager [req-e51735d6-c1be-491b-b494-e00326fd3311 req-f0464ed5-3dc1-47e6-bbaa-b15170fd8587 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] No waiting events found dispatching network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.751 247428 WARNING nova.compute.manager [req-e51735d6-c1be-491b-b494-e00326fd3311 req-f0464ed5-3dc1-47e6-bbaa-b15170fd8587 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received unexpected event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 for instance with vm_state active and task_state None.#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.774 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.778 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:42:47 np0005596060 podman[304045]: 2026-01-26 18:42:47.870585537 +0000 UTC m=+0.053450338 container create a5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 13:42:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:47.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:47 np0005596060 systemd[1]: Started libpod-conmon-a5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7.scope.
Jan 26 13:42:47 np0005596060 podman[304045]: 2026-01-26 18:42:47.840790182 +0000 UTC m=+0.023654993 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:42:47 np0005596060 nova_compute[247421]: 2026-01-26 18:42:47.984 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:47 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:42:47 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7db931e2b191bfc91258ae3738f84fe0bc8d1694e29224dfee4dc36185e1b864/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:42:48 np0005596060 podman[304045]: 2026-01-26 18:42:48.011100293 +0000 UTC m=+0.193965114 container init a5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 13:42:48 np0005596060 podman[304045]: 2026-01-26 18:42:48.017479823 +0000 UTC m=+0.200344624 container start a5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 26 13:42:48 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[304059]: [NOTICE]   (304063) : New worker (304065) forked
Jan 26 13:42:48 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[304059]: [NOTICE]   (304063) : Loading success.
Jan 26 13:42:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:48.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 3.1 KiB/s rd, 29 KiB/s wr, 6 op/s
Jan 26 13:42:49 np0005596060 nova_compute[247421]: 2026-01-26 18:42:49.721 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:42:49 np0005596060 nova_compute[247421]: 2026-01-26 18:42:49.721 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 26 13:42:49 np0005596060 nova_compute[247421]: 2026-01-26 18:42:49.898 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 26 13:42:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:49.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:49 np0005596060 nova_compute[247421]: 2026-01-26 18:42:49.914 247428 DEBUG nova.compute.manager [req-7985f08d-042b-4e51-b016-a236cdad0b47 req-81f0c04b-8b74-47b0-a334-9e794e5d540f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:42:49 np0005596060 nova_compute[247421]: 2026-01-26 18:42:49.915 247428 DEBUG oslo_concurrency.lockutils [req-7985f08d-042b-4e51-b016-a236cdad0b47 req-81f0c04b-8b74-47b0-a334-9e794e5d540f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:42:49 np0005596060 nova_compute[247421]: 2026-01-26 18:42:49.915 247428 DEBUG oslo_concurrency.lockutils [req-7985f08d-042b-4e51-b016-a236cdad0b47 req-81f0c04b-8b74-47b0-a334-9e794e5d540f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:42:49 np0005596060 nova_compute[247421]: 2026-01-26 18:42:49.915 247428 DEBUG oslo_concurrency.lockutils [req-7985f08d-042b-4e51-b016-a236cdad0b47 req-81f0c04b-8b74-47b0-a334-9e794e5d540f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:42:49 np0005596060 nova_compute[247421]: 2026-01-26 18:42:49.915 247428 DEBUG nova.compute.manager [req-7985f08d-042b-4e51-b016-a236cdad0b47 req-81f0c04b-8b74-47b0-a334-9e794e5d540f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] No waiting events found dispatching network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:42:49 np0005596060 nova_compute[247421]: 2026-01-26 18:42:49.916 247428 WARNING nova.compute.manager [req-7985f08d-042b-4e51-b016-a236cdad0b47 req-81f0c04b-8b74-47b0-a334-9e794e5d540f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received unexpected event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 for instance with vm_state active and task_state None.#033[00m
Jan 26 13:42:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:50.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 0 B/s wr, 2 op/s
Jan 26 13:42:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:42:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:51.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:51 np0005596060 nova_compute[247421]: 2026-01-26 18:42:51.922 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:52.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 73 op/s
Jan 26 13:42:52 np0005596060 nova_compute[247421]: 2026-01-26 18:42:52.987 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:53.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:54.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:54 np0005596060 nova_compute[247421]: 2026-01-26 18:42:54.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:42:54 np0005596060 nova_compute[247421]: 2026-01-26 18:42:54.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 26 13:42:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 73 op/s
Jan 26 13:42:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:55.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:42:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:56.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:56 np0005596060 nova_compute[247421]: 2026-01-26 18:42:56.277 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:42:56 np0005596060 nova_compute[247421]: 2026-01-26 18:42:56.424 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Triggering sync for uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 26 13:42:56 np0005596060 nova_compute[247421]: 2026-01-26 18:42:56.424 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:42:56 np0005596060 nova_compute[247421]: 2026-01-26 18:42:56.425 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:42:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:42:56 np0005596060 nova_compute[247421]: 2026-01-26 18:42:56.834 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.409s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:42:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 0 B/s wr, 136 op/s
Jan 26 13:42:56 np0005596060 nova_compute[247421]: 2026-01-26 18:42:56.936 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 26 13:42:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:57.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 26 13:42:58 np0005596060 nova_compute[247421]: 2026-01-26 18:42:58.093 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:42:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:42:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:42:58.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:42:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 0 B/s wr, 289 op/s
Jan 26 13:42:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:42:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:42:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:42:59.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:00.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 0 B/s wr, 287 op/s
Jan 26 13:43:01 np0005596060 ovn_controller[148842]: 2026-01-26T18:43:01Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:37:f2:1e 10.100.0.6
Jan 26 13:43:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:43:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:01.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:01 np0005596060 nova_compute[247421]: 2026-01-26 18:43:01.940 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:02.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:02 np0005596060 podman[304084]: 2026-01-26 18:43:02.820757852 +0000 UTC m=+0.079852719 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Jan 26 13:43:02 np0005596060 podman[304083]: 2026-01-26 18:43:02.820736162 +0000 UTC m=+0.079753977 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 13:43:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 12 KiB/s wr, 332 op/s
Jan 26 13:43:03 np0005596060 nova_compute[247421]: 2026-01-26 18:43:03.096 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:03.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002172319932998325 of space, bias 1.0, pg target 0.6516959798994976 quantized to 32 (current 32)
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:43:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:04.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 121 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 658 KiB/s rd, 12 KiB/s wr, 261 op/s
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.299931) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452985300006, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1892, "num_deletes": 252, "total_data_size": 3357902, "memory_usage": 3398872, "flush_reason": "Manual Compaction"}
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452985317284, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 3276555, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43999, "largest_seqno": 45890, "table_properties": {"data_size": 3268000, "index_size": 5241, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17798, "raw_average_key_size": 20, "raw_value_size": 3250882, "raw_average_value_size": 3715, "num_data_blocks": 228, "num_entries": 875, "num_filter_entries": 875, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769452807, "oldest_key_time": 1769452807, "file_creation_time": 1769452985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 17380 microseconds, and 7004 cpu microseconds.
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.317316) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 3276555 bytes OK
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.317334) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.318439) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.318450) EVENT_LOG_v1 {"time_micros": 1769452985318447, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.318466) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 3350178, prev total WAL file size 3350178, number of live WAL files 2.
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.319296) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(3199KB)], [98(8979KB)]
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452985319397, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 12471532, "oldest_snapshot_seqno": -1}
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 7107 keys, 10505051 bytes, temperature: kUnknown
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452985391820, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 10505051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10459768, "index_size": 26419, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17797, "raw_key_size": 183014, "raw_average_key_size": 25, "raw_value_size": 10334417, "raw_average_value_size": 1454, "num_data_blocks": 1049, "num_entries": 7107, "num_filter_entries": 7107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769452985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.392467) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10505051 bytes
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.394214) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.6 rd, 144.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.8 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 7632, records dropped: 525 output_compression: NoCompression
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.394248) EVENT_LOG_v1 {"time_micros": 1769452985394231, "job": 58, "event": "compaction_finished", "compaction_time_micros": 72676, "compaction_time_cpu_micros": 34599, "output_level": 6, "num_output_files": 1, "total_output_size": 10505051, "num_input_records": 7632, "num_output_records": 7107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452985396138, "job": 58, "event": "table_file_deletion", "file_number": 100}
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452985400476, "job": 58, "event": "table_file_deletion", "file_number": 98}
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.319201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.400649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.400655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.400657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.400659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:43:05 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:05.400660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:43:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:05.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:06.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:43:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 122 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 658 KiB/s rd, 14 KiB/s wr, 261 op/s
Jan 26 13:43:06 np0005596060 nova_compute[247421]: 2026-01-26 18:43:06.944 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:07.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:08 np0005596060 ovn_controller[148842]: 2026-01-26T18:43:08Z|00191|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 26 13:43:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:43:08 np0005596060 nova_compute[247421]: 2026-01-26 18:43:08.147 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:43:08 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:08.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:08 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:08 np0005596060 nova_compute[247421]: 2026-01-26 18:43:08.629 247428 INFO nova.compute.manager [None req-be053c12-ffdd-4350-b63d-97b1242e9d97 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Get console output#033[00m
Jan 26 13:43:08 np0005596060 nova_compute[247421]: 2026-01-26 18:43:08.634 285734 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 26 13:43:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 122 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 621 KiB/s rd, 22 KiB/s wr, 199 op/s
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.212 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.213 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.243 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.452 247428 DEBUG nova.compute.manager [req-b5d208fb-bef9-4b64-b602-ad1f9327f414 req-375c8c73-a811-4fe5-857b-eb61511096a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-changed-a76d9016-429e-486e-9688-7ceb79a8fbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.452 247428 DEBUG nova.compute.manager [req-b5d208fb-bef9-4b64-b602-ad1f9327f414 req-375c8c73-a811-4fe5-857b-eb61511096a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Refreshing instance network info cache due to event network-changed-a76d9016-429e-486e-9688-7ceb79a8fbc5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.452 247428 DEBUG oslo_concurrency.lockutils [req-b5d208fb-bef9-4b64-b602-ad1f9327f414 req-375c8c73-a811-4fe5-857b-eb61511096a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.453 247428 DEBUG oslo_concurrency.lockutils [req-b5d208fb-bef9-4b64-b602-ad1f9327f414 req-375c8c73-a811-4fe5-857b-eb61511096a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.453 247428 DEBUG nova.network.neutron [req-b5d208fb-bef9-4b64-b602-ad1f9327f414 req-375c8c73-a811-4fe5-857b-eb61511096a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Refreshing network info cache for port a76d9016-429e-486e-9688-7ceb79a8fbc5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.585 247428 DEBUG oslo_concurrency.lockutils [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.586 247428 DEBUG oslo_concurrency.lockutils [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.587 247428 DEBUG oslo_concurrency.lockutils [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.587 247428 DEBUG oslo_concurrency.lockutils [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.587 247428 DEBUG oslo_concurrency.lockutils [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.588 247428 INFO nova.compute.manager [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Terminating instance#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.589 247428 DEBUG nova.compute.manager [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:43:09 np0005596060 kernel: tapa76d9016-42 (unregistering): left promiscuous mode
Jan 26 13:43:09 np0005596060 NetworkManager[48900]: <info>  [1769452989.6397] device (tapa76d9016-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:43:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:43:09Z|00192|binding|INFO|Releasing lport a76d9016-429e-486e-9688-7ceb79a8fbc5 from this chassis (sb_readonly=0)
Jan 26 13:43:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:43:09Z|00193|binding|INFO|Setting lport a76d9016-429e-486e-9688-7ceb79a8fbc5 down in Southbound
Jan 26 13:43:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:43:09Z|00194|binding|INFO|Removing iface tapa76d9016-42 ovn-installed in OVS
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.647 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.648 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.658 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:f2:1e 10.100.0.6'], port_security=['fa:16:3e:37:f2:1e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '301bad5c2066428fa7f214024672bf92', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a9116262-f922-4c30-b270-06114ade6067', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fd646169-00bc-4f72-a516-e4fe4f18150a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=a76d9016-429e-486e-9688-7ceb79a8fbc5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.659 159331 INFO neutron.agent.ovn.metadata.agent [-] Port a76d9016-429e-486e-9688-7ceb79a8fbc5 in datapath 74d216bf-0dc0-4b43-8bc3-cb7617fae49c unbound from our chassis#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.660 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74d216bf-0dc0-4b43-8bc3-cb7617fae49c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.663 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.662 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[652c49bb-6b2f-4152-934c-41632a1cdf8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.664 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c namespace which is not needed anymore#033[00m
Jan 26 13:43:09 np0005596060 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Jan 26 13:43:09 np0005596060 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000001c.scope: Consumed 13.383s CPU time.
Jan 26 13:43:09 np0005596060 systemd-machined[213879]: Machine qemu-16-instance-0000001c terminated.
Jan 26 13:43:09 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[304059]: [NOTICE]   (304063) : haproxy version is 2.8.14-c23fe91
Jan 26 13:43:09 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[304059]: [NOTICE]   (304063) : path to executable is /usr/sbin/haproxy
Jan 26 13:43:09 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[304059]: [WARNING]  (304063) : Exiting Master process...
Jan 26 13:43:09 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[304059]: [ALERT]    (304063) : Current worker (304065) exited with code 143 (Terminated)
Jan 26 13:43:09 np0005596060 neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c[304059]: [WARNING]  (304063) : All workers exited. Exiting... (0)
Jan 26 13:43:09 np0005596060 systemd[1]: libpod-a5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7.scope: Deactivated successfully.
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.832 247428 INFO nova.virt.libvirt.driver [-] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Instance destroyed successfully.#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.833 247428 DEBUG nova.objects.instance [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lazy-loading 'resources' on Instance uuid 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:43:09 np0005596060 podman[304333]: 2026-01-26 18:43:09.834553836 +0000 UTC m=+0.062981887 container died a5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:43:09 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7-userdata-shm.mount: Deactivated successfully.
Jan 26 13:43:09 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7db931e2b191bfc91258ae3738f84fe0bc8d1694e29224dfee4dc36185e1b864-merged.mount: Deactivated successfully.
Jan 26 13:43:09 np0005596060 podman[304333]: 2026-01-26 18:43:09.871005639 +0000 UTC m=+0.099433660 container cleanup a5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 13:43:09 np0005596060 systemd[1]: libpod-conmon-a5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7.scope: Deactivated successfully.
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.900 247428 DEBUG nova.virt.libvirt.vif [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:41:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1870761727',display_name='tempest-TestNetworkAdvancedServerOps-server-1870761727',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1870761727',id=28,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhf8yCneBiri1NAWyBA0ya0pyyYSQJ1a9HF6KVwoI/Pve/OQeuQ4yJEGv4aAQjY92iHdUS2CnnT1UTHksLJvf4vYPD+3UTTgsTTJA6SiRoW+zUAoxAoX7Qe2Gdgl++cJQ==',key_name='tempest-TestNetworkAdvancedServerOps-1931875589',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:42:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='301bad5c2066428fa7f214024672bf92',ramdisk_id='',reservation_id='r-u8yf0pcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1357272614',owner_user_name='tempest-TestNetworkAdvancedServerOps-1357272614-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:42:47Z,user_data=None,user_id='ffa1cd7ba9e543f78f2ef48c2a7a67a2',uuid=2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.901 247428 DEBUG nova.network.os_vif_util [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converting VIF {"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.901 247428 DEBUG nova.network.os_vif_util [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.902 247428 DEBUG os_vif [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.904 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.905 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa76d9016-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.906 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.909 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.911 247428 INFO os_vif [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:f2:1e,bridge_name='br-int',has_traffic_filtering=True,id=a76d9016-429e-486e-9688-7ceb79a8fbc5,network=Network(74d216bf-0dc0-4b43-8bc3-cb7617fae49c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa76d9016-42')#033[00m
Jan 26 13:43:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:09.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:09 np0005596060 podman[304372]: 2026-01-26 18:43:09.934804795 +0000 UTC m=+0.040977666 container remove a5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.938 247428 DEBUG nova.compute.manager [req-1213d9f2-b021-4c39-9254-fc8a26d0aa16 req-786b89da-bdee-4e51-874b-854fd49c2eb1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-vif-unplugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.938 247428 DEBUG oslo_concurrency.lockutils [req-1213d9f2-b021-4c39-9254-fc8a26d0aa16 req-786b89da-bdee-4e51-874b-854fd49c2eb1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.939 247428 DEBUG oslo_concurrency.lockutils [req-1213d9f2-b021-4c39-9254-fc8a26d0aa16 req-786b89da-bdee-4e51-874b-854fd49c2eb1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.939 247428 DEBUG oslo_concurrency.lockutils [req-1213d9f2-b021-4c39-9254-fc8a26d0aa16 req-786b89da-bdee-4e51-874b-854fd49c2eb1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.939 247428 DEBUG nova.compute.manager [req-1213d9f2-b021-4c39-9254-fc8a26d0aa16 req-786b89da-bdee-4e51-874b-854fd49c2eb1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] No waiting events found dispatching network-vif-unplugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.939 247428 DEBUG nova.compute.manager [req-1213d9f2-b021-4c39-9254-fc8a26d0aa16 req-786b89da-bdee-4e51-874b-854fd49c2eb1 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-vif-unplugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.940 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd7dec4-d090-4d9a-a1f0-021027409691]: (4, ('Mon Jan 26 06:43:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c (a5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7)\na5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7\nMon Jan 26 06:43:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c (a5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7)\na5230fa78ffc0b9be031e555dbdaf72145dc0eff4c7335749afc7fe1b49af7e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.941 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[7f15293d-3ab1-458d-a129-0b202322682d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.942 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74d216bf-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.944 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:09 np0005596060 kernel: tap74d216bf-00: left promiscuous mode
Jan 26 13:43:09 np0005596060 nova_compute[247421]: 2026-01-26 18:43:09.957 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.960 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[06127575-eaf6-431f-9ef5-f91f4a24c2ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.978 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[85d2c862-4949-4de6-b5ec-946f7a3d76b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.980 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf457af-9e82-4fdc-89c5-acea67a28e62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.994 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[9d469857-eca9-4f6c-8197-f20c48bfcb2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 673176, 'reachable_time': 30934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304405, 'error': None, 'target': 'ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:43:09 np0005596060 systemd[1]: run-netns-ovnmeta\x2d74d216bf\x2d0dc0\x2d4b43\x2d8bc3\x2dcb7617fae49c.mount: Deactivated successfully.
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.998 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74d216bf-0dc0-4b43-8bc3-cb7617fae49c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:43:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:09.998 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[2890755b-cd99-47e5-a8b8-463b87106d4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:43:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:10.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:10 np0005596060 nova_compute[247421]: 2026-01-26 18:43:10.348 247428 INFO nova.virt.libvirt.driver [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Deleting instance files /var/lib/nova/instances/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_del#033[00m
Jan 26 13:43:10 np0005596060 nova_compute[247421]: 2026-01-26 18:43:10.349 247428 INFO nova.virt.libvirt.driver [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Deletion of /var/lib/nova/instances/2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa_del complete#033[00m
Jan 26 13:43:10 np0005596060 nova_compute[247421]: 2026-01-26 18:43:10.477 247428 INFO nova.compute.manager [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:43:10 np0005596060 nova_compute[247421]: 2026-01-26 18:43:10.479 247428 DEBUG oslo.service.loopingcall [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:43:10 np0005596060 nova_compute[247421]: 2026-01-26 18:43:10.479 247428 DEBUG nova.compute.manager [-] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:43:10 np0005596060 nova_compute[247421]: 2026-01-26 18:43:10.479 247428 DEBUG nova.network.neutron [-] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:43:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 122 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 529 KiB/s rd, 22 KiB/s wr, 45 op/s
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:11 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 604c28cb-57d1-40ca-945a-db9d8e35180f does not exist
Jan 26 13:43:11 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 56f4e6d4-8a06-47ee-bded-b5cd74f12292 does not exist
Jan 26 13:43:11 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 3bc7f911-ecd0-416f-bc9d-3775771f1bb4 does not exist
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:43:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:43:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:11.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:12 np0005596060 nova_compute[247421]: 2026-01-26 18:43:12.077 247428 DEBUG nova.compute.manager [req-e7d05704-04d2-43b0-bfb3-30269322bf3d req-75e7d5cb-3b71-4b94-a4a8-484dbf5fcb8d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:43:12 np0005596060 nova_compute[247421]: 2026-01-26 18:43:12.080 247428 DEBUG oslo_concurrency.lockutils [req-e7d05704-04d2-43b0-bfb3-30269322bf3d req-75e7d5cb-3b71-4b94-a4a8-484dbf5fcb8d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:43:12 np0005596060 nova_compute[247421]: 2026-01-26 18:43:12.080 247428 DEBUG oslo_concurrency.lockutils [req-e7d05704-04d2-43b0-bfb3-30269322bf3d req-75e7d5cb-3b71-4b94-a4a8-484dbf5fcb8d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:43:12 np0005596060 nova_compute[247421]: 2026-01-26 18:43:12.080 247428 DEBUG oslo_concurrency.lockutils [req-e7d05704-04d2-43b0-bfb3-30269322bf3d req-75e7d5cb-3b71-4b94-a4a8-484dbf5fcb8d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:43:12 np0005596060 nova_compute[247421]: 2026-01-26 18:43:12.081 247428 DEBUG nova.compute.manager [req-e7d05704-04d2-43b0-bfb3-30269322bf3d req-75e7d5cb-3b71-4b94-a4a8-484dbf5fcb8d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] No waiting events found dispatching network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:43:12 np0005596060 nova_compute[247421]: 2026-01-26 18:43:12.081 247428 WARNING nova.compute.manager [req-e7d05704-04d2-43b0-bfb3-30269322bf3d req-75e7d5cb-3b71-4b94-a4a8-484dbf5fcb8d 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received unexpected event network-vif-plugged-a76d9016-429e-486e-9688-7ceb79a8fbc5 for instance with vm_state active and task_state deleting.#033[00m
Jan 26 13:43:12 np0005596060 podman[304546]: 2026-01-26 18:43:12.093401229 +0000 UTC m=+0.039871459 container create dde8804c7e99e0354125e0f89276e70f2fcf7adb9d65bf81b70a41e30c4b6f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:43:12 np0005596060 systemd[1]: Started libpod-conmon-dde8804c7e99e0354125e0f89276e70f2fcf7adb9d65bf81b70a41e30c4b6f8d.scope.
Jan 26 13:43:12 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:43:12 np0005596060 podman[304546]: 2026-01-26 18:43:12.077885671 +0000 UTC m=+0.024355921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:43:12 np0005596060 podman[304546]: 2026-01-26 18:43:12.17854523 +0000 UTC m=+0.125015480 container init dde8804c7e99e0354125e0f89276e70f2fcf7adb9d65bf81b70a41e30c4b6f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 26 13:43:12 np0005596060 podman[304546]: 2026-01-26 18:43:12.186728814 +0000 UTC m=+0.133199044 container start dde8804c7e99e0354125e0f89276e70f2fcf7adb9d65bf81b70a41e30c4b6f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:43:12 np0005596060 podman[304546]: 2026-01-26 18:43:12.190577981 +0000 UTC m=+0.137048241 container attach dde8804c7e99e0354125e0f89276e70f2fcf7adb9d65bf81b70a41e30c4b6f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:43:12 np0005596060 systemd[1]: libpod-dde8804c7e99e0354125e0f89276e70f2fcf7adb9d65bf81b70a41e30c4b6f8d.scope: Deactivated successfully.
Jan 26 13:43:12 np0005596060 pensive_antonelli[304562]: 167 167
Jan 26 13:43:12 np0005596060 conmon[304562]: conmon dde8804c7e99e0354125 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dde8804c7e99e0354125e0f89276e70f2fcf7adb9d65bf81b70a41e30c4b6f8d.scope/container/memory.events
Jan 26 13:43:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:12.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:12 np0005596060 podman[304569]: 2026-01-26 18:43:12.242843018 +0000 UTC m=+0.033295694 container died dde8804c7e99e0354125e0f89276e70f2fcf7adb9d65bf81b70a41e30c4b6f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:43:12 np0005596060 systemd[1]: var-lib-containers-storage-overlay-88a8102b0f1d348d61755fd910f50a7856327a8d8ec5a020c76b23c284832815-merged.mount: Deactivated successfully.
Jan 26 13:43:12 np0005596060 podman[304569]: 2026-01-26 18:43:12.275579848 +0000 UTC m=+0.066032524 container remove dde8804c7e99e0354125e0f89276e70f2fcf7adb9d65bf81b70a41e30c4b6f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:43:12 np0005596060 systemd[1]: libpod-conmon-dde8804c7e99e0354125e0f89276e70f2fcf7adb9d65bf81b70a41e30c4b6f8d.scope: Deactivated successfully.
Jan 26 13:43:12 np0005596060 podman[304592]: 2026-01-26 18:43:12.438247578 +0000 UTC m=+0.039547910 container create 572507e928921f7bc89420b1e53ffcd009b22d56d3fcf7055c4eaf697890d5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:43:12 np0005596060 systemd[1]: Started libpod-conmon-572507e928921f7bc89420b1e53ffcd009b22d56d3fcf7055c4eaf697890d5f2.scope.
Jan 26 13:43:12 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:43:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65c03885e6e24c3c34373c2c459fd2762ca5af40fd1d5917fbb0b2be576a4309/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65c03885e6e24c3c34373c2c459fd2762ca5af40fd1d5917fbb0b2be576a4309/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65c03885e6e24c3c34373c2c459fd2762ca5af40fd1d5917fbb0b2be576a4309/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65c03885e6e24c3c34373c2c459fd2762ca5af40fd1d5917fbb0b2be576a4309/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:12 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65c03885e6e24c3c34373c2c459fd2762ca5af40fd1d5917fbb0b2be576a4309/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:12 np0005596060 nova_compute[247421]: 2026-01-26 18:43:12.506 247428 DEBUG nova.network.neutron [req-b5d208fb-bef9-4b64-b602-ad1f9327f414 req-375c8c73-a811-4fe5-857b-eb61511096a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Updated VIF entry in instance network info cache for port a76d9016-429e-486e-9688-7ceb79a8fbc5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:43:12 np0005596060 nova_compute[247421]: 2026-01-26 18:43:12.507 247428 DEBUG nova.network.neutron [req-b5d208fb-bef9-4b64-b602-ad1f9327f414 req-375c8c73-a811-4fe5-857b-eb61511096a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Updating instance_info_cache with network_info: [{"id": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "address": "fa:16:3e:37:f2:1e", "network": {"id": "74d216bf-0dc0-4b43-8bc3-cb7617fae49c", "bridge": "br-int", "label": "tempest-network-smoke--1002300424", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "301bad5c2066428fa7f214024672bf92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa76d9016-42", "ovs_interfaceid": "a76d9016-429e-486e-9688-7ceb79a8fbc5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:43:12 np0005596060 podman[304592]: 2026-01-26 18:43:12.514946107 +0000 UTC m=+0.116246439 container init 572507e928921f7bc89420b1e53ffcd009b22d56d3fcf7055c4eaf697890d5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:43:12 np0005596060 podman[304592]: 2026-01-26 18:43:12.422223177 +0000 UTC m=+0.023523539 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:43:12 np0005596060 podman[304592]: 2026-01-26 18:43:12.521600004 +0000 UTC m=+0.122900336 container start 572507e928921f7bc89420b1e53ffcd009b22d56d3fcf7055c4eaf697890d5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 13:43:12 np0005596060 podman[304592]: 2026-01-26 18:43:12.52504469 +0000 UTC m=+0.126345052 container attach 572507e928921f7bc89420b1e53ffcd009b22d56d3fcf7055c4eaf697890d5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 13:43:12 np0005596060 nova_compute[247421]: 2026-01-26 18:43:12.527 247428 DEBUG oslo_concurrency.lockutils [req-b5d208fb-bef9-4b64-b602-ad1f9327f414 req-375c8c73-a811-4fe5-857b-eb61511096a9 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:43:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:43:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:43:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail; 550 KiB/s rd, 24 KiB/s wr, 74 op/s
Jan 26 13:43:13 np0005596060 nova_compute[247421]: 2026-01-26 18:43:13.149 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:13 np0005596060 vigorous_easley[304608]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:43:13 np0005596060 vigorous_easley[304608]: --> relative data size: 1.0
Jan 26 13:43:13 np0005596060 vigorous_easley[304608]: --> All data devices are unavailable
Jan 26 13:43:13 np0005596060 systemd[1]: libpod-572507e928921f7bc89420b1e53ffcd009b22d56d3fcf7055c4eaf697890d5f2.scope: Deactivated successfully.
Jan 26 13:43:13 np0005596060 podman[304592]: 2026-01-26 18:43:13.316946826 +0000 UTC m=+0.918247158 container died 572507e928921f7bc89420b1e53ffcd009b22d56d3fcf7055c4eaf697890d5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 13:43:13 np0005596060 systemd[1]: var-lib-containers-storage-overlay-65c03885e6e24c3c34373c2c459fd2762ca5af40fd1d5917fbb0b2be576a4309-merged.mount: Deactivated successfully.
Jan 26 13:43:13 np0005596060 podman[304592]: 2026-01-26 18:43:13.372965688 +0000 UTC m=+0.974266020 container remove 572507e928921f7bc89420b1e53ffcd009b22d56d3fcf7055c4eaf697890d5f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_easley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 13:43:13 np0005596060 systemd[1]: libpod-conmon-572507e928921f7bc89420b1e53ffcd009b22d56d3fcf7055c4eaf697890d5f2.scope: Deactivated successfully.
Jan 26 13:43:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:13.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:13 np0005596060 podman[304778]: 2026-01-26 18:43:13.934058187 +0000 UTC m=+0.036665759 container create 3be8c8ce86abc86eb988fce5c43300b59733145f5eb1e27143b4b035d934c19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_boyd, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 13:43:13 np0005596060 systemd[1]: Started libpod-conmon-3be8c8ce86abc86eb988fce5c43300b59733145f5eb1e27143b4b035d934c19c.scope.
Jan 26 13:43:13 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:43:14 np0005596060 podman[304778]: 2026-01-26 18:43:13.999989007 +0000 UTC m=+0.102596609 container init 3be8c8ce86abc86eb988fce5c43300b59733145f5eb1e27143b4b035d934c19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_boyd, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:43:14 np0005596060 podman[304778]: 2026-01-26 18:43:14.006891709 +0000 UTC m=+0.109499281 container start 3be8c8ce86abc86eb988fce5c43300b59733145f5eb1e27143b4b035d934c19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:43:14 np0005596060 podman[304778]: 2026-01-26 18:43:14.010536931 +0000 UTC m=+0.113144503 container attach 3be8c8ce86abc86eb988fce5c43300b59733145f5eb1e27143b4b035d934c19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_boyd, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 26 13:43:14 np0005596060 heuristic_boyd[304794]: 167 167
Jan 26 13:43:14 np0005596060 systemd[1]: libpod-3be8c8ce86abc86eb988fce5c43300b59733145f5eb1e27143b4b035d934c19c.scope: Deactivated successfully.
Jan 26 13:43:14 np0005596060 podman[304778]: 2026-01-26 18:43:14.013206317 +0000 UTC m=+0.115813919 container died 3be8c8ce86abc86eb988fce5c43300b59733145f5eb1e27143b4b035d934c19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:43:14 np0005596060 podman[304778]: 2026-01-26 18:43:13.917124413 +0000 UTC m=+0.019732005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:43:14 np0005596060 systemd[1]: var-lib-containers-storage-overlay-040f97c47a17e073b99780f1686f95e42f158052112a180a3c7f21657453b090-merged.mount: Deactivated successfully.
Jan 26 13:43:14 np0005596060 podman[304778]: 2026-01-26 18:43:14.045101866 +0000 UTC m=+0.147709438 container remove 3be8c8ce86abc86eb988fce5c43300b59733145f5eb1e27143b4b035d934c19c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:43:14 np0005596060 systemd[1]: libpod-conmon-3be8c8ce86abc86eb988fce5c43300b59733145f5eb1e27143b4b035d934c19c.scope: Deactivated successfully.
Jan 26 13:43:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:43:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:43:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:43:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:43:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:43:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:43:14 np0005596060 podman[304818]: 2026-01-26 18:43:14.212040653 +0000 UTC m=+0.044777822 container create e094605472af3dfc10b3b5e77ac6289f1de663b2551625a88fba9d9d53ede50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:43:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:14.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:14 np0005596060 systemd[1]: Started libpod-conmon-e094605472af3dfc10b3b5e77ac6289f1de663b2551625a88fba9d9d53ede50a.scope.
Jan 26 13:43:14 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:43:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7745fa15590e2c00e3d565c7bc98e77a71d9fad85c1061ec882d8ef66fcbc349/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7745fa15590e2c00e3d565c7bc98e77a71d9fad85c1061ec882d8ef66fcbc349/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7745fa15590e2c00e3d565c7bc98e77a71d9fad85c1061ec882d8ef66fcbc349/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:14 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7745fa15590e2c00e3d565c7bc98e77a71d9fad85c1061ec882d8ef66fcbc349/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:14 np0005596060 podman[304818]: 2026-01-26 18:43:14.192594446 +0000 UTC m=+0.025331635 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:43:14 np0005596060 podman[304818]: 2026-01-26 18:43:14.295479931 +0000 UTC m=+0.128217130 container init e094605472af3dfc10b3b5e77ac6289f1de663b2551625a88fba9d9d53ede50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:43:14 np0005596060 podman[304818]: 2026-01-26 18:43:14.302470936 +0000 UTC m=+0.135208105 container start e094605472af3dfc10b3b5e77ac6289f1de663b2551625a88fba9d9d53ede50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:43:14 np0005596060 podman[304818]: 2026-01-26 18:43:14.305392839 +0000 UTC m=+0.138130008 container attach e094605472af3dfc10b3b5e77ac6289f1de663b2551625a88fba9d9d53ede50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 26 13:43:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:14.769 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:43:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:14.772 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:43:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:14.772 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:43:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 26 13:43:14 np0005596060 nova_compute[247421]: 2026-01-26 18:43:14.907 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]: {
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:    "1": [
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:        {
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            "devices": [
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "/dev/loop3"
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            ],
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            "lv_name": "ceph_lv0",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            "lv_size": "7511998464",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            "name": "ceph_lv0",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            "tags": {
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "ceph.cluster_name": "ceph",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "ceph.crush_device_class": "",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "ceph.encrypted": "0",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "ceph.osd_id": "1",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "ceph.type": "block",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:                "ceph.vdo": "0"
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            },
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            "type": "block",
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:            "vg_name": "ceph_vg0"
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:        }
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]:    ]
Jan 26 13:43:15 np0005596060 frosty_mcclintock[304835]: }
Jan 26 13:43:15 np0005596060 systemd[1]: libpod-e094605472af3dfc10b3b5e77ac6289f1de663b2551625a88fba9d9d53ede50a.scope: Deactivated successfully.
Jan 26 13:43:15 np0005596060 podman[304818]: 2026-01-26 18:43:15.04796056 +0000 UTC m=+0.880697719 container died e094605472af3dfc10b3b5e77ac6289f1de663b2551625a88fba9d9d53ede50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 13:43:15 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7745fa15590e2c00e3d565c7bc98e77a71d9fad85c1061ec882d8ef66fcbc349-merged.mount: Deactivated successfully.
Jan 26 13:43:15 np0005596060 podman[304818]: 2026-01-26 18:43:15.102282869 +0000 UTC m=+0.935020038 container remove e094605472af3dfc10b3b5e77ac6289f1de663b2551625a88fba9d9d53ede50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:43:15 np0005596060 systemd[1]: libpod-conmon-e094605472af3dfc10b3b5e77ac6289f1de663b2551625a88fba9d9d53ede50a.scope: Deactivated successfully.
Jan 26 13:43:15 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:43:15.216 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:43:15 np0005596060 podman[304993]: 2026-01-26 18:43:15.681016321 +0000 UTC m=+0.037171741 container create 81da6684c209c4cd5fb947a32f14d2175c15b45eaa4ca75b07776a51c5bf1658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 13:43:15 np0005596060 systemd[1]: Started libpod-conmon-81da6684c209c4cd5fb947a32f14d2175c15b45eaa4ca75b07776a51c5bf1658.scope.
Jan 26 13:43:15 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:43:15 np0005596060 podman[304993]: 2026-01-26 18:43:15.665359909 +0000 UTC m=+0.021515349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:43:15 np0005596060 podman[304993]: 2026-01-26 18:43:15.767658559 +0000 UTC m=+0.123814029 container init 81da6684c209c4cd5fb947a32f14d2175c15b45eaa4ca75b07776a51c5bf1658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kilby, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:43:15 np0005596060 podman[304993]: 2026-01-26 18:43:15.775402363 +0000 UTC m=+0.131557783 container start 81da6684c209c4cd5fb947a32f14d2175c15b45eaa4ca75b07776a51c5bf1658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kilby, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:43:15 np0005596060 podman[304993]: 2026-01-26 18:43:15.77887094 +0000 UTC m=+0.135026410 container attach 81da6684c209c4cd5fb947a32f14d2175c15b45eaa4ca75b07776a51c5bf1658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kilby, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:43:15 np0005596060 inspiring_kilby[305009]: 167 167
Jan 26 13:43:15 np0005596060 systemd[1]: libpod-81da6684c209c4cd5fb947a32f14d2175c15b45eaa4ca75b07776a51c5bf1658.scope: Deactivated successfully.
Jan 26 13:43:15 np0005596060 podman[304993]: 2026-01-26 18:43:15.780897601 +0000 UTC m=+0.137053021 container died 81da6684c209c4cd5fb947a32f14d2175c15b45eaa4ca75b07776a51c5bf1658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 13:43:15 np0005596060 systemd[1]: var-lib-containers-storage-overlay-cbc771893c0c9754233d4af591329edc949a3306c4e31a565f1c200bdf2cb884-merged.mount: Deactivated successfully.
Jan 26 13:43:15 np0005596060 podman[304993]: 2026-01-26 18:43:15.816875171 +0000 UTC m=+0.173030591 container remove 81da6684c209c4cd5fb947a32f14d2175c15b45eaa4ca75b07776a51c5bf1658 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kilby, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:43:15 np0005596060 systemd[1]: libpod-conmon-81da6684c209c4cd5fb947a32f14d2175c15b45eaa4ca75b07776a51c5bf1658.scope: Deactivated successfully.
Jan 26 13:43:15 np0005596060 nova_compute[247421]: 2026-01-26 18:43:15.890 247428 DEBUG nova.network.neutron [-] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:43:15 np0005596060 nova_compute[247421]: 2026-01-26 18:43:15.909 247428 INFO nova.compute.manager [-] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Took 5.43 seconds to deallocate network for instance.#033[00m
Jan 26 13:43:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:15.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:15 np0005596060 nova_compute[247421]: 2026-01-26 18:43:15.947 247428 DEBUG oslo_concurrency.lockutils [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:43:15 np0005596060 nova_compute[247421]: 2026-01-26 18:43:15.948 247428 DEBUG oslo_concurrency.lockutils [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:43:15 np0005596060 nova_compute[247421]: 2026-01-26 18:43:15.960 247428 DEBUG nova.compute.manager [req-b597ab1f-e119-4e53-a621-549130170b83 req-00031c02-78dc-449d-8165-d81ced650594 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Received event network-vif-deleted-a76d9016-429e-486e-9688-7ceb79a8fbc5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:43:15 np0005596060 podman[305032]: 2026-01-26 18:43:15.987391688 +0000 UTC m=+0.047690515 container create ee2c1186811958ceb1ee25b8a34b466e37eaacb952f736248932f412154880d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 13:43:16 np0005596060 systemd[1]: Started libpod-conmon-ee2c1186811958ceb1ee25b8a34b466e37eaacb952f736248932f412154880d4.scope.
Jan 26 13:43:16 np0005596060 nova_compute[247421]: 2026-01-26 18:43:16.019 247428 DEBUG oslo_concurrency.processutils [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:43:16 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:43:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f9580e3829c87947e5bdc370c56768683afdf88881da0b254c0a70fb667bc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f9580e3829c87947e5bdc370c56768683afdf88881da0b254c0a70fb667bc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f9580e3829c87947e5bdc370c56768683afdf88881da0b254c0a70fb667bc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:16 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f9580e3829c87947e5bdc370c56768683afdf88881da0b254c0a70fb667bc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:43:16 np0005596060 podman[305032]: 2026-01-26 18:43:16.0534226 +0000 UTC m=+0.113721447 container init ee2c1186811958ceb1ee25b8a34b466e37eaacb952f736248932f412154880d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:43:16 np0005596060 podman[305032]: 2026-01-26 18:43:16.058754713 +0000 UTC m=+0.119053540 container start ee2c1186811958ceb1ee25b8a34b466e37eaacb952f736248932f412154880d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:43:16 np0005596060 podman[305032]: 2026-01-26 18:43:16.061691907 +0000 UTC m=+0.121990734 container attach ee2c1186811958ceb1ee25b8a34b466e37eaacb952f736248932f412154880d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:43:16 np0005596060 podman[305032]: 2026-01-26 18:43:15.968454304 +0000 UTC m=+0.028753151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:43:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:16.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1565855405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:43:16 np0005596060 nova_compute[247421]: 2026-01-26 18:43:16.451 247428 DEBUG oslo_concurrency.processutils [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:43:16 np0005596060 nova_compute[247421]: 2026-01-26 18:43:16.458 247428 DEBUG nova.compute.provider_tree [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:43:16 np0005596060 nova_compute[247421]: 2026-01-26 18:43:16.473 247428 DEBUG nova.scheduler.client.report [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:43:16 np0005596060 nova_compute[247421]: 2026-01-26 18:43:16.491 247428 DEBUG oslo_concurrency.lockutils [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:43:16 np0005596060 nova_compute[247421]: 2026-01-26 18:43:16.513 247428 INFO nova.scheduler.client.report [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Deleted allocations for instance 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa#033[00m
Jan 26 13:43:16 np0005596060 nova_compute[247421]: 2026-01-26 18:43:16.566 247428 DEBUG oslo_concurrency.lockutils [None req-2a09d153-ff16-481d-ab71-9495b3581797 ffa1cd7ba9e543f78f2ef48c2a7a67a2 301bad5c2066428fa7f214024672bf92 - - default default] Lock "2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.729966) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452996729999, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 386, "num_deletes": 257, "total_data_size": 282243, "memory_usage": 289744, "flush_reason": "Manual Compaction"}
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452996734235, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 269128, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45891, "largest_seqno": 46276, "table_properties": {"data_size": 266758, "index_size": 470, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5817, "raw_average_key_size": 18, "raw_value_size": 261945, "raw_average_value_size": 823, "num_data_blocks": 20, "num_entries": 318, "num_filter_entries": 318, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769452986, "oldest_key_time": 1769452986, "file_creation_time": 1769452996, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 4311 microseconds, and 1768 cpu microseconds.
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.734273) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 269128 bytes OK
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.734295) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.735962) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.735975) EVENT_LOG_v1 {"time_micros": 1769452996735971, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.735990) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 279716, prev total WAL file size 279716, number of live WAL files 2.
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.736518) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353038' seq:72057594037927935, type:22 .. '6C6F676D0031373631' seq:0, type:0; will stop at (end)
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(262KB)], [101(10MB)]
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452996736589, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10774179, "oldest_snapshot_seqno": -1}
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6899 keys, 10648824 bytes, temperature: kUnknown
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452996800008, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 10648824, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10604185, "index_size": 26287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17285, "raw_key_size": 179642, "raw_average_key_size": 26, "raw_value_size": 10481720, "raw_average_value_size": 1519, "num_data_blocks": 1041, "num_entries": 6899, "num_filter_entries": 6899, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769452996, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.800448) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 10648824 bytes
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.803647) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.3 rd, 167.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.0 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(79.6) write-amplify(39.6) OK, records in: 7425, records dropped: 526 output_compression: NoCompression
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.803666) EVENT_LOG_v1 {"time_micros": 1769452996803657, "job": 60, "event": "compaction_finished", "compaction_time_micros": 63651, "compaction_time_cpu_micros": 24053, "output_level": 6, "num_output_files": 1, "total_output_size": 10648824, "num_input_records": 7425, "num_output_records": 6899, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452996804124, "job": 60, "event": "table_file_deletion", "file_number": 103}
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769452996806872, "job": 60, "event": "table_file_deletion", "file_number": 101}
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.736429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.807018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.807023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.807024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.807026) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:43:16.807027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:43:16 np0005596060 awesome_elgamal[305049]: {
Jan 26 13:43:16 np0005596060 awesome_elgamal[305049]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:43:16 np0005596060 awesome_elgamal[305049]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:43:16 np0005596060 awesome_elgamal[305049]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:43:16 np0005596060 awesome_elgamal[305049]:        "osd_id": 1,
Jan 26 13:43:16 np0005596060 awesome_elgamal[305049]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:43:16 np0005596060 awesome_elgamal[305049]:        "type": "bluestore"
Jan 26 13:43:16 np0005596060 awesome_elgamal[305049]:    }
Jan 26 13:43:16 np0005596060 awesome_elgamal[305049]: }
Jan 26 13:43:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 29 op/s
Jan 26 13:43:16 np0005596060 systemd[1]: libpod-ee2c1186811958ceb1ee25b8a34b466e37eaacb952f736248932f412154880d4.scope: Deactivated successfully.
Jan 26 13:43:16 np0005596060 podman[305032]: 2026-01-26 18:43:16.914087976 +0000 UTC m=+0.974386803 container died ee2c1186811958ceb1ee25b8a34b466e37eaacb952f736248932f412154880d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:43:16 np0005596060 systemd[1]: var-lib-containers-storage-overlay-55f9580e3829c87947e5bdc370c56768683afdf88881da0b254c0a70fb667bc1-merged.mount: Deactivated successfully.
Jan 26 13:43:16 np0005596060 podman[305032]: 2026-01-26 18:43:16.963237676 +0000 UTC m=+1.023536503 container remove ee2c1186811958ceb1ee25b8a34b466e37eaacb952f736248932f412154880d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 13:43:16 np0005596060 systemd[1]: libpod-conmon-ee2c1186811958ceb1ee25b8a34b466e37eaacb952f736248932f412154880d4.scope: Deactivated successfully.
Jan 26 13:43:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:43:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:43:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:17 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b3a123f7-e14d-4dff-8d85-c87a8261f0b3 does not exist
Jan 26 13:43:17 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 240649ac-f56d-4fc4-b941-f0fb25580740 does not exist
Jan 26 13:43:17 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 07ecbee2-bb66-4aea-98bd-2eedfc772311 does not exist
Jan 26 13:43:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:17.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:43:18 np0005596060 nova_compute[247421]: 2026-01-26 18:43:18.151 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:18.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 9.9 KiB/s wr, 28 op/s
Jan 26 13:43:19 np0005596060 nova_compute[247421]: 2026-01-26 18:43:19.911 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:19.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:20.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:20 np0005596060 nova_compute[247421]: 2026-01-26 18:43:20.864 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 13:43:20 np0005596060 nova_compute[247421]: 2026-01-26 18:43:20.994 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:43:21 np0005596060 nova_compute[247421]: 2026-01-26 18:43:21.798 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:43:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:21.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:22.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 26 13:43:23 np0005596060 nova_compute[247421]: 2026-01-26 18:43:23.153 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:23 np0005596060 nova_compute[247421]: 2026-01-26 18:43:23.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:43:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:23.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:24.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:24 np0005596060 nova_compute[247421]: 2026-01-26 18:43:24.832 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769452989.830993, 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:43:24 np0005596060 nova_compute[247421]: 2026-01-26 18:43:24.832 247428 INFO nova.compute.manager [-] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:43:24 np0005596060 nova_compute[247421]: 2026-01-26 18:43:24.855 247428 DEBUG nova.compute.manager [None req-d2e57ad4-6bf6-449b-ad97-d1f71159bba7 - - - - - -] [instance: 2c5a5db9-1c98-4eb8-8dff-3db63d34f8aa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:43:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:24 np0005596060 nova_compute[247421]: 2026-01-26 18:43:24.914 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:25.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:26.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:43:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:27 np0005596060 nova_compute[247421]: 2026-01-26 18:43:27.647 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:43:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:27.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:28 np0005596060 nova_compute[247421]: 2026-01-26 18:43:28.197 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:28.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:28 np0005596060 nova_compute[247421]: 2026-01-26 18:43:28.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:43:28 np0005596060 nova_compute[247421]: 2026-01-26 18:43:28.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:43:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:29 np0005596060 nova_compute[247421]: 2026-01-26 18:43:29.918 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:29.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:30.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:30 np0005596060 nova_compute[247421]: 2026-01-26 18:43:30.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:43:30 np0005596060 nova_compute[247421]: 2026-01-26 18:43:30.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:43:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:31 np0005596060 nova_compute[247421]: 2026-01-26 18:43:31.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:43:31 np0005596060 nova_compute[247421]: 2026-01-26 18:43:31.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:43:31 np0005596060 nova_compute[247421]: 2026-01-26 18:43:31.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:43:31 np0005596060 nova_compute[247421]: 2026-01-26 18:43:31.678 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:43:31 np0005596060 nova_compute[247421]: 2026-01-26 18:43:31.679 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:43:31 np0005596060 nova_compute[247421]: 2026-01-26 18:43:31.701 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:43:31 np0005596060 nova_compute[247421]: 2026-01-26 18:43:31.701 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:43:31 np0005596060 nova_compute[247421]: 2026-01-26 18:43:31.702 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:43:31 np0005596060 nova_compute[247421]: 2026-01-26 18:43:31.702 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:43:31 np0005596060 nova_compute[247421]: 2026-01-26 18:43:31.702 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:43:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:43:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:31.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:43:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827905210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.139 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:43:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:32.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.295 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.296 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4601MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.297 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.297 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.356 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.356 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.426 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing inventories for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.485 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating ProviderTree inventory for provider c679f5ea-e093-4909-bb04-0342c8551a8f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.486 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.499 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing aggregate associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.524 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing trait associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.541 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:43:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:43:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1424263981' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.971 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:43:32 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.976 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:43:33 np0005596060 nova_compute[247421]: 2026-01-26 18:43:32.999 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:43:33 np0005596060 nova_compute[247421]: 2026-01-26 18:43:33.021 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:43:33 np0005596060 nova_compute[247421]: 2026-01-26 18:43:33.021 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:43:33 np0005596060 nova_compute[247421]: 2026-01-26 18:43:33.199 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:33 np0005596060 podman[305261]: 2026-01-26 18:43:33.807301491 +0000 UTC m=+0.059452798 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Jan 26 13:43:33 np0005596060 podman[305262]: 2026-01-26 18:43:33.838165844 +0000 UTC m=+0.089675405 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:43:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:33.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:34.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:34 np0005596060 nova_compute[247421]: 2026-01-26 18:43:34.920 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:34 np0005596060 nova_compute[247421]: 2026-01-26 18:43:34.992 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:43:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:35.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:36.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:43:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:37.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:38 np0005596060 nova_compute[247421]: 2026-01-26 18:43:38.201 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:38.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:39 np0005596060 nova_compute[247421]: 2026-01-26 18:43:39.924 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:39.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:40.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:43:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:41.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:42.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:43 np0005596060 nova_compute[247421]: 2026-01-26 18:43:43.204 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:43:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:43.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:43:44
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'backups', 'images', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'vms', '.rgw.root']
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:43:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:44.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:44 np0005596060 nova_compute[247421]: 2026-01-26 18:43:44.927 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:43:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:43:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:45.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:46.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:43:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:47.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:48 np0005596060 nova_compute[247421]: 2026-01-26 18:43:48.205 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:43:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:48.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:49 np0005596060 nova_compute[247421]: 2026-01-26 18:43:49.931 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:43:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:49.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:50.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:43:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:51.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:52.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:53 np0005596060 nova_compute[247421]: 2026-01-26 18:43:53.207 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:43:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:53.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:43:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:54.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:43:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:54 np0005596060 nova_compute[247421]: 2026-01-26 18:43:54.934 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:43:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:55.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:56.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:43:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:43:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:57.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:43:58 np0005596060 nova_compute[247421]: 2026-01-26 18:43:58.209 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:43:58 np0005596060 ovn_controller[148842]: 2026-01-26T18:43:58Z|00195|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 26 13:43:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:43:58.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:43:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:43:59 np0005596060 nova_compute[247421]: 2026-01-26 18:43:59.937 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:43:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:43:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:43:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:43:59.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:00.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:44:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:01.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:02.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:03 np0005596060 nova_compute[247421]: 2026-01-26 18:44:03.211 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:44:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:03.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:44:04 np0005596060 podman[305395]: 2026-01-26 18:44:04.190064381 +0000 UTC m=+0.082650920 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 13:44:04 np0005596060 podman[305396]: 2026-01-26 18:44:04.195610379 +0000 UTC m=+0.085696635 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:44:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:04.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:04 np0005596060 nova_compute[247421]: 2026-01-26 18:44:04.959 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:44:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:44:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:05.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:44:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:06.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:44:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:07.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:08 np0005596060 nova_compute[247421]: 2026-01-26 18:44:08.213 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:44:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:08.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:09 np0005596060 nova_compute[247421]: 2026-01-26 18:44:09.962 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:44:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:09.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:10.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:10 np0005596060 nova_compute[247421]: 2026-01-26 18:44:10.594 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:44:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:44:10.595 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 13:44:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:44:10.596 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 13:44:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:44:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:11.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:12.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:13 np0005596060 nova_compute[247421]: 2026-01-26 18:44:13.215 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:44:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:13.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:44:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:44:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:44:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:44:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:44:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:44:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:14.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:44:14.770 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:44:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:44:14.771 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:44:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:44:14.771 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:44:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:14 np0005596060 nova_compute[247421]: 2026-01-26 18:44:14.965 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:44:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:15.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:16.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:44:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:44:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:44:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:44:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:44:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:17.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:18 np0005596060 nova_compute[247421]: 2026-01-26 18:44:18.216 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:44:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:18.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev c459eb5d-16ed-4231-8350-c19c585c2701 does not exist
Jan 26 13:44:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 4f884381-d454-4568-bc81-ef90e08d5b36 does not exist
Jan 26 13:44:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev e8cc2e1b-c9fa-433e-89e4-e411baf6f6cd does not exist
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:44:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:44:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:44:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:44:19 np0005596060 podman[305863]: 2026-01-26 18:44:19.590109672 +0000 UTC m=+0.044972906 container create 340d5f4f262888776fdd1210a35725699da591a96f4659f6739b47f1bf3797f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:44:19 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:44:19.598 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 13:44:19 np0005596060 systemd[1]: Started libpod-conmon-340d5f4f262888776fdd1210a35725699da591a96f4659f6739b47f1bf3797f6.scope.
Jan 26 13:44:19 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:44:19 np0005596060 podman[305863]: 2026-01-26 18:44:19.57043568 +0000 UTC m=+0.025298934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:44:19 np0005596060 podman[305863]: 2026-01-26 18:44:19.680711179 +0000 UTC m=+0.135574413 container init 340d5f4f262888776fdd1210a35725699da591a96f4659f6739b47f1bf3797f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shockley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:44:19 np0005596060 podman[305863]: 2026-01-26 18:44:19.691078079 +0000 UTC m=+0.145941313 container start 340d5f4f262888776fdd1210a35725699da591a96f4659f6739b47f1bf3797f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shockley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:44:19 np0005596060 podman[305863]: 2026-01-26 18:44:19.695097879 +0000 UTC m=+0.149961133 container attach 340d5f4f262888776fdd1210a35725699da591a96f4659f6739b47f1bf3797f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:44:19 np0005596060 sad_shockley[305879]: 167 167
Jan 26 13:44:19 np0005596060 systemd[1]: libpod-340d5f4f262888776fdd1210a35725699da591a96f4659f6739b47f1bf3797f6.scope: Deactivated successfully.
Jan 26 13:44:19 np0005596060 podman[305863]: 2026-01-26 18:44:19.699871939 +0000 UTC m=+0.154735173 container died 340d5f4f262888776fdd1210a35725699da591a96f4659f6739b47f1bf3797f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 13:44:19 np0005596060 systemd[1]: var-lib-containers-storage-overlay-af5515af66898f1d16e34742954c4476d9c659a3be859720e7cbacca590f81d7-merged.mount: Deactivated successfully.
Jan 26 13:44:19 np0005596060 podman[305863]: 2026-01-26 18:44:19.747226944 +0000 UTC m=+0.202090198 container remove 340d5f4f262888776fdd1210a35725699da591a96f4659f6739b47f1bf3797f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:44:19 np0005596060 systemd[1]: libpod-conmon-340d5f4f262888776fdd1210a35725699da591a96f4659f6739b47f1bf3797f6.scope: Deactivated successfully.
Jan 26 13:44:19 np0005596060 podman[305904]: 2026-01-26 18:44:19.91531155 +0000 UTC m=+0.045700895 container create aec1c2da45792b71fb4553a768ca6eff7f78126635a313d0f736e66f404d06a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hawking, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:44:19 np0005596060 systemd[1]: Started libpod-conmon-aec1c2da45792b71fb4553a768ca6eff7f78126635a313d0f736e66f404d06a0.scope.
Jan 26 13:44:19 np0005596060 nova_compute[247421]: 2026-01-26 18:44:19.967 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:44:19 np0005596060 podman[305904]: 2026-01-26 18:44:19.895290569 +0000 UTC m=+0.025679934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:44:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:19.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:20 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:44:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2423dc6aeb848a89d7dbd10458516d283e80937bb715718e4885f60e4e859ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2423dc6aeb848a89d7dbd10458516d283e80937bb715718e4885f60e4e859ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2423dc6aeb848a89d7dbd10458516d283e80937bb715718e4885f60e4e859ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2423dc6aeb848a89d7dbd10458516d283e80937bb715718e4885f60e4e859ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:20 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2423dc6aeb848a89d7dbd10458516d283e80937bb715718e4885f60e4e859ab/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:20 np0005596060 podman[305904]: 2026-01-26 18:44:20.022562123 +0000 UTC m=+0.152951488 container init aec1c2da45792b71fb4553a768ca6eff7f78126635a313d0f736e66f404d06a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hawking, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 13:44:20 np0005596060 podman[305904]: 2026-01-26 18:44:20.031347213 +0000 UTC m=+0.161736568 container start aec1c2da45792b71fb4553a768ca6eff7f78126635a313d0f736e66f404d06a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hawking, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:44:20 np0005596060 podman[305904]: 2026-01-26 18:44:20.036051591 +0000 UTC m=+0.166440936 container attach aec1c2da45792b71fb4553a768ca6eff7f78126635a313d0f736e66f404d06a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 13:44:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:44:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:20.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:44:20 np0005596060 modest_hawking[305920]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:44:20 np0005596060 modest_hawking[305920]: --> relative data size: 1.0
Jan 26 13:44:20 np0005596060 modest_hawking[305920]: --> All data devices are unavailable
Jan 26 13:44:20 np0005596060 systemd[1]: libpod-aec1c2da45792b71fb4553a768ca6eff7f78126635a313d0f736e66f404d06a0.scope: Deactivated successfully.
Jan 26 13:44:20 np0005596060 podman[305904]: 2026-01-26 18:44:20.870350508 +0000 UTC m=+1.000739853 container died aec1c2da45792b71fb4553a768ca6eff7f78126635a313d0f736e66f404d06a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 13:44:20 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b2423dc6aeb848a89d7dbd10458516d283e80937bb715718e4885f60e4e859ab-merged.mount: Deactivated successfully.
Jan 26 13:44:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:20 np0005596060 podman[305904]: 2026-01-26 18:44:20.929328044 +0000 UTC m=+1.059717389 container remove aec1c2da45792b71fb4553a768ca6eff7f78126635a313d0f736e66f404d06a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 13:44:20 np0005596060 systemd[1]: libpod-conmon-aec1c2da45792b71fb4553a768ca6eff7f78126635a313d0f736e66f404d06a0.scope: Deactivated successfully.
Jan 26 13:44:21 np0005596060 podman[306091]: 2026-01-26 18:44:21.564275252 +0000 UTC m=+0.037773806 container create dcac2bba242e7b963879945e98b0cdb2bef8655a6389c09f61c35cba7eb417e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_booth, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 13:44:21 np0005596060 systemd[1]: Started libpod-conmon-dcac2bba242e7b963879945e98b0cdb2bef8655a6389c09f61c35cba7eb417e5.scope.
Jan 26 13:44:21 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:44:21 np0005596060 podman[306091]: 2026-01-26 18:44:21.548351803 +0000 UTC m=+0.021850397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:44:21 np0005596060 podman[306091]: 2026-01-26 18:44:21.65377572 +0000 UTC m=+0.127274354 container init dcac2bba242e7b963879945e98b0cdb2bef8655a6389c09f61c35cba7eb417e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_booth, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:44:21 np0005596060 podman[306091]: 2026-01-26 18:44:21.661901194 +0000 UTC m=+0.135399758 container start dcac2bba242e7b963879945e98b0cdb2bef8655a6389c09f61c35cba7eb417e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_booth, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:44:21 np0005596060 relaxed_booth[306108]: 167 167
Jan 26 13:44:21 np0005596060 systemd[1]: libpod-dcac2bba242e7b963879945e98b0cdb2bef8655a6389c09f61c35cba7eb417e5.scope: Deactivated successfully.
Jan 26 13:44:21 np0005596060 conmon[306108]: conmon dcac2bba242e7b963879 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dcac2bba242e7b963879945e98b0cdb2bef8655a6389c09f61c35cba7eb417e5.scope/container/memory.events
Jan 26 13:44:21 np0005596060 podman[306091]: 2026-01-26 18:44:21.665616137 +0000 UTC m=+0.139114781 container attach dcac2bba242e7b963879945e98b0cdb2bef8655a6389c09f61c35cba7eb417e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_booth, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 13:44:21 np0005596060 podman[306091]: 2026-01-26 18:44:21.669109284 +0000 UTC m=+0.142607848 container died dcac2bba242e7b963879945e98b0cdb2bef8655a6389c09f61c35cba7eb417e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 13:44:21 np0005596060 systemd[1]: var-lib-containers-storage-overlay-98b864244a1160005879e07addad7f2c702398db71c0009e0c608d6ed57d8180-merged.mount: Deactivated successfully.
Jan 26 13:44:21 np0005596060 podman[306091]: 2026-01-26 18:44:21.711751281 +0000 UTC m=+0.185249845 container remove dcac2bba242e7b963879945e98b0cdb2bef8655a6389c09f61c35cba7eb417e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 13:44:21 np0005596060 systemd[1]: libpod-conmon-dcac2bba242e7b963879945e98b0cdb2bef8655a6389c09f61c35cba7eb417e5.scope: Deactivated successfully.
Jan 26 13:44:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:44:21 np0005596060 podman[306131]: 2026-01-26 18:44:21.875236682 +0000 UTC m=+0.042307170 container create d13cf5b9a92f70c92575972ef120bdb768b5598a29955999c66ff9a9a615fe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hamilton, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 13:44:21 np0005596060 systemd[1]: Started libpod-conmon-d13cf5b9a92f70c92575972ef120bdb768b5598a29955999c66ff9a9a615fe72.scope.
Jan 26 13:44:21 np0005596060 podman[306131]: 2026-01-26 18:44:21.858463382 +0000 UTC m=+0.025533890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:44:21 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:44:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1d9028240cc02d59ac5f92844dde6a7d284bbc64d36ed839865bc99e6c39f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1d9028240cc02d59ac5f92844dde6a7d284bbc64d36ed839865bc99e6c39f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1d9028240cc02d59ac5f92844dde6a7d284bbc64d36ed839865bc99e6c39f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:21 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b1d9028240cc02d59ac5f92844dde6a7d284bbc64d36ed839865bc99e6c39f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:21 np0005596060 podman[306131]: 2026-01-26 18:44:21.979208864 +0000 UTC m=+0.146279372 container init d13cf5b9a92f70c92575972ef120bdb768b5598a29955999c66ff9a9a615fe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hamilton, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:44:21 np0005596060 podman[306131]: 2026-01-26 18:44:21.98626885 +0000 UTC m=+0.153339338 container start d13cf5b9a92f70c92575972ef120bdb768b5598a29955999c66ff9a9a615fe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:44:21 np0005596060 podman[306131]: 2026-01-26 18:44:21.989709727 +0000 UTC m=+0.156780215 container attach d13cf5b9a92f70c92575972ef120bdb768b5598a29955999c66ff9a9a615fe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hamilton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:44:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:21.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:22.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:22 np0005596060 nova_compute[247421]: 2026-01-26 18:44:22.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]: {
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:    "1": [
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:        {
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            "devices": [
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "/dev/loop3"
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            ],
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            "lv_name": "ceph_lv0",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            "lv_size": "7511998464",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            "name": "ceph_lv0",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            "tags": {
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "ceph.cluster_name": "ceph",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "ceph.crush_device_class": "",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "ceph.encrypted": "0",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "ceph.osd_id": "1",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "ceph.type": "block",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:                "ceph.vdo": "0"
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            },
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            "type": "block",
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:            "vg_name": "ceph_vg0"
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:        }
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]:    ]
Jan 26 13:44:22 np0005596060 eager_hamilton[306148]: }
Jan 26 13:44:22 np0005596060 systemd[1]: libpod-d13cf5b9a92f70c92575972ef120bdb768b5598a29955999c66ff9a9a615fe72.scope: Deactivated successfully.
Jan 26 13:44:22 np0005596060 podman[306131]: 2026-01-26 18:44:22.841893991 +0000 UTC m=+1.008964469 container died d13cf5b9a92f70c92575972ef120bdb768b5598a29955999c66ff9a9a615fe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:44:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4b1d9028240cc02d59ac5f92844dde6a7d284bbc64d36ed839865bc99e6c39f1-merged.mount: Deactivated successfully.
Jan 26 13:44:22 np0005596060 podman[306131]: 2026-01-26 18:44:22.902368904 +0000 UTC m=+1.069439392 container remove d13cf5b9a92f70c92575972ef120bdb768b5598a29955999c66ff9a9a615fe72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:44:22 np0005596060 systemd[1]: libpod-conmon-d13cf5b9a92f70c92575972ef120bdb768b5598a29955999c66ff9a9a615fe72.scope: Deactivated successfully.
Jan 26 13:44:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:23 np0005596060 nova_compute[247421]: 2026-01-26 18:44:23.217 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:23 np0005596060 podman[306312]: 2026-01-26 18:44:23.536989704 +0000 UTC m=+0.037640092 container create 3f43e6c3c5c029b45cdbbfc7ed0611a68516bc1e2939246b56324ab6944d4732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:44:23 np0005596060 systemd[1]: Started libpod-conmon-3f43e6c3c5c029b45cdbbfc7ed0611a68516bc1e2939246b56324ab6944d4732.scope.
Jan 26 13:44:23 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:44:23 np0005596060 podman[306312]: 2026-01-26 18:44:23.605646672 +0000 UTC m=+0.106297070 container init 3f43e6c3c5c029b45cdbbfc7ed0611a68516bc1e2939246b56324ab6944d4732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:44:23 np0005596060 podman[306312]: 2026-01-26 18:44:23.613109719 +0000 UTC m=+0.113760097 container start 3f43e6c3c5c029b45cdbbfc7ed0611a68516bc1e2939246b56324ab6944d4732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 13:44:23 np0005596060 podman[306312]: 2026-01-26 18:44:23.519952498 +0000 UTC m=+0.020602896 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:44:23 np0005596060 podman[306312]: 2026-01-26 18:44:23.615730945 +0000 UTC m=+0.116381323 container attach 3f43e6c3c5c029b45cdbbfc7ed0611a68516bc1e2939246b56324ab6944d4732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:44:23 np0005596060 exciting_banach[306328]: 167 167
Jan 26 13:44:23 np0005596060 systemd[1]: libpod-3f43e6c3c5c029b45cdbbfc7ed0611a68516bc1e2939246b56324ab6944d4732.scope: Deactivated successfully.
Jan 26 13:44:23 np0005596060 podman[306312]: 2026-01-26 18:44:23.617830507 +0000 UTC m=+0.118480905 container died 3f43e6c3c5c029b45cdbbfc7ed0611a68516bc1e2939246b56324ab6944d4732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:44:23 np0005596060 systemd[1]: var-lib-containers-storage-overlay-62d1def83f4bcfb88fefb78238e55e2f4d31dfbf1333e28d62fd4057ebea4c15-merged.mount: Deactivated successfully.
Jan 26 13:44:23 np0005596060 podman[306312]: 2026-01-26 18:44:23.650236848 +0000 UTC m=+0.150887226 container remove 3f43e6c3c5c029b45cdbbfc7ed0611a68516bc1e2939246b56324ab6944d4732 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_banach, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:44:23 np0005596060 systemd[1]: libpod-conmon-3f43e6c3c5c029b45cdbbfc7ed0611a68516bc1e2939246b56324ab6944d4732.scope: Deactivated successfully.
Jan 26 13:44:23 np0005596060 podman[306354]: 2026-01-26 18:44:23.867554416 +0000 UTC m=+0.050970496 container create 1e1d2c996ef966581dd53b655052fd5aadbd70cc73d06b2de636a06bca05282a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:44:23 np0005596060 systemd[1]: Started libpod-conmon-1e1d2c996ef966581dd53b655052fd5aadbd70cc73d06b2de636a06bca05282a.scope.
Jan 26 13:44:23 np0005596060 podman[306354]: 2026-01-26 18:44:23.846328085 +0000 UTC m=+0.029744215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:44:23 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:44:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40265a49fb96cdff8dfc4614ff51dccce713a2eba2c1f53af9c4f7733dfee65/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40265a49fb96cdff8dfc4614ff51dccce713a2eba2c1f53af9c4f7733dfee65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40265a49fb96cdff8dfc4614ff51dccce713a2eba2c1f53af9c4f7733dfee65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:23 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40265a49fb96cdff8dfc4614ff51dccce713a2eba2c1f53af9c4f7733dfee65/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:44:23 np0005596060 podman[306354]: 2026-01-26 18:44:23.973551078 +0000 UTC m=+0.156967188 container init 1e1d2c996ef966581dd53b655052fd5aadbd70cc73d06b2de636a06bca05282a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 26 13:44:23 np0005596060 podman[306354]: 2026-01-26 18:44:23.981874746 +0000 UTC m=+0.165290826 container start 1e1d2c996ef966581dd53b655052fd5aadbd70cc73d06b2de636a06bca05282a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:44:23 np0005596060 podman[306354]: 2026-01-26 18:44:23.985163418 +0000 UTC m=+0.168579578 container attach 1e1d2c996ef966581dd53b655052fd5aadbd70cc73d06b2de636a06bca05282a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:44:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:23.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:44:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:24.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:44:24 np0005596060 blissful_golick[306370]: {
Jan 26 13:44:24 np0005596060 blissful_golick[306370]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:44:24 np0005596060 blissful_golick[306370]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:44:24 np0005596060 blissful_golick[306370]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:44:24 np0005596060 blissful_golick[306370]:        "osd_id": 1,
Jan 26 13:44:24 np0005596060 blissful_golick[306370]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:44:24 np0005596060 blissful_golick[306370]:        "type": "bluestore"
Jan 26 13:44:24 np0005596060 blissful_golick[306370]:    }
Jan 26 13:44:24 np0005596060 blissful_golick[306370]: }
Jan 26 13:44:24 np0005596060 systemd[1]: libpod-1e1d2c996ef966581dd53b655052fd5aadbd70cc73d06b2de636a06bca05282a.scope: Deactivated successfully.
Jan 26 13:44:24 np0005596060 podman[306354]: 2026-01-26 18:44:24.845410865 +0000 UTC m=+1.028826945 container died 1e1d2c996ef966581dd53b655052fd5aadbd70cc73d06b2de636a06bca05282a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:44:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b40265a49fb96cdff8dfc4614ff51dccce713a2eba2c1f53af9c4f7733dfee65-merged.mount: Deactivated successfully.
Jan 26 13:44:24 np0005596060 podman[306354]: 2026-01-26 18:44:24.897849567 +0000 UTC m=+1.081265647 container remove 1e1d2c996ef966581dd53b655052fd5aadbd70cc73d06b2de636a06bca05282a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_golick, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:44:24 np0005596060 systemd[1]: libpod-conmon-1e1d2c996ef966581dd53b655052fd5aadbd70cc73d06b2de636a06bca05282a.scope: Deactivated successfully.
Jan 26 13:44:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:44:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:44:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev dbcff428-6eb1-40b0-b611-bb2f33d78de5 does not exist
Jan 26 13:44:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 65ae99b9-32f1-4a77-8f1f-f9b5b2ba3c5b does not exist
Jan 26 13:44:24 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 3eb73622-1bf6-452c-9e63-f1b73f1b75e1 does not exist
Jan 26 13:44:25 np0005596060 nova_compute[247421]: 2026-01-26 18:44:24.999 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:25 np0005596060 nova_compute[247421]: 2026-01-26 18:44:25.654 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:44:25 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:25 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:44:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:44:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:25.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:44:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:26.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:44:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:28.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:28 np0005596060 nova_compute[247421]: 2026-01-26 18:44:28.219 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:28.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:28 np0005596060 nova_compute[247421]: 2026-01-26 18:44:28.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:44:28 np0005596060 nova_compute[247421]: 2026-01-26 18:44:28.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:44:28 np0005596060 nova_compute[247421]: 2026-01-26 18:44:28.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:44:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:30.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:30 np0005596060 nova_compute[247421]: 2026-01-26 18:44:30.056 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:44:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:30.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:44:30 np0005596060 nova_compute[247421]: 2026-01-26 18:44:30.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:44:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:31 np0005596060 nova_compute[247421]: 2026-01-26 18:44:31.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:44:31 np0005596060 nova_compute[247421]: 2026-01-26 18:44:31.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:44:31 np0005596060 nova_compute[247421]: 2026-01-26 18:44:31.677 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:44:31 np0005596060 nova_compute[247421]: 2026-01-26 18:44:31.677 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:44:31 np0005596060 nova_compute[247421]: 2026-01-26 18:44:31.678 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:44:31 np0005596060 nova_compute[247421]: 2026-01-26 18:44:31.678 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:44:31 np0005596060 nova_compute[247421]: 2026-01-26 18:44:31.678 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:44:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:44:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:32.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:44:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3716802241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:44:32 np0005596060 nova_compute[247421]: 2026-01-26 18:44:32.138 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:44:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:44:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:32.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:44:32 np0005596060 nova_compute[247421]: 2026-01-26 18:44:32.334 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:44:32 np0005596060 nova_compute[247421]: 2026-01-26 18:44:32.335 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4603MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:44:32 np0005596060 nova_compute[247421]: 2026-01-26 18:44:32.335 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:44:32 np0005596060 nova_compute[247421]: 2026-01-26 18:44:32.335 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:44:32 np0005596060 nova_compute[247421]: 2026-01-26 18:44:32.399 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:44:32 np0005596060 nova_compute[247421]: 2026-01-26 18:44:32.400 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:44:32 np0005596060 nova_compute[247421]: 2026-01-26 18:44:32.529 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:44:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:44:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1943177923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:44:32 np0005596060 nova_compute[247421]: 2026-01-26 18:44:32.981 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:44:32 np0005596060 nova_compute[247421]: 2026-01-26 18:44:32.987 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:44:33 np0005596060 nova_compute[247421]: 2026-01-26 18:44:33.112 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:44:33 np0005596060 nova_compute[247421]: 2026-01-26 18:44:33.114 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:44:33 np0005596060 nova_compute[247421]: 2026-01-26 18:44:33.114 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:44:33 np0005596060 nova_compute[247421]: 2026-01-26 18:44:33.223 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:34.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:44:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:34.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:44:34 np0005596060 podman[306555]: 2026-01-26 18:44:34.801424092 +0000 UTC m=+0.062302620 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 13:44:34 np0005596060 podman[306556]: 2026-01-26 18:44:34.831031003 +0000 UTC m=+0.091716237 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 26 13:44:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:35 np0005596060 nova_compute[247421]: 2026-01-26 18:44:35.057 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:35 np0005596060 nova_compute[247421]: 2026-01-26 18:44:35.114 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:44:35 np0005596060 nova_compute[247421]: 2026-01-26 18:44:35.114 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:44:35 np0005596060 nova_compute[247421]: 2026-01-26 18:44:35.114 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:44:35 np0005596060 nova_compute[247421]: 2026-01-26 18:44:35.366 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:44:35 np0005596060 nova_compute[247421]: 2026-01-26 18:44:35.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:44:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:36.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:36.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:44:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:44:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:38.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:44:38 np0005596060 nova_compute[247421]: 2026-01-26 18:44:38.225 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:38.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:40 np0005596060 nova_compute[247421]: 2026-01-26 18:44:40.060 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:40.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:44:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:40.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:44:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:41 np0005596060 nova_compute[247421]: 2026-01-26 18:44:41.647 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:44:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:44:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:44:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:42.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:44:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:42.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:43 np0005596060 nova_compute[247421]: 2026-01-26 18:44:43.226 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:44.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:44:44
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', 'images', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'vms']
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:44:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:44.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 41 MiB data, 378 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:44:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:44:45 np0005596060 nova_compute[247421]: 2026-01-26 18:44:45.062 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:46.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:46.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:44:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 56 MiB data, 373 MiB used, 21 GiB / 21 GiB avail; 597 B/s rd, 463 KiB/s wr, 1 op/s
Jan 26 13:44:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:48.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:48 np0005596060 nova_compute[247421]: 2026-01-26 18:44:48.228 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:48.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:44:50 np0005596060 nova_compute[247421]: 2026-01-26 18:44:50.066 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:50.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:44:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:50.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:44:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:44:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:44:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:44:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:52.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:44:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:52.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:44:53 np0005596060 nova_compute[247421]: 2026-01-26 18:44:53.231 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:54.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:54.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 13:44:55 np0005596060 nova_compute[247421]: 2026-01-26 18:44:55.069 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:56.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:44:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 510 KiB/s rd, 1.8 MiB/s wr, 45 op/s
Jan 26 13:44:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:44:58.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:58 np0005596060 nova_compute[247421]: 2026-01-26 18:44:58.233 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:44:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:44:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:44:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:44:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:44:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 98 op/s
Jan 26 13:45:00 np0005596060 nova_compute[247421]: 2026-01-26 18:45:00.072 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:00.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:00.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:45:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:45:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:02.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:02.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:45:03 np0005596060 nova_compute[247421]: 2026-01-26 18:45:03.235 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:45:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:04.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:04.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:45:05 np0005596060 nova_compute[247421]: 2026-01-26 18:45:05.112 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:05 np0005596060 podman[306716]: 2026-01-26 18:45:05.786898936 +0000 UTC m=+0.049741495 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:45:05 np0005596060 podman[306717]: 2026-01-26 18:45:05.813883942 +0000 UTC m=+0.076970537 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 26 13:45:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:06.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:45:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:06.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:45:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:45:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 88 MiB data, 389 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 159 KiB/s wr, 80 op/s
Jan 26 13:45:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:08.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:08 np0005596060 nova_compute[247421]: 2026-01-26 18:45:08.236 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:08.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 98 MiB data, 404 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 76 op/s
Jan 26 13:45:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:10.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:10 np0005596060 nova_compute[247421]: 2026-01-26 18:45:10.116 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:10.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 98 MiB data, 404 MiB used, 21 GiB / 21 GiB avail; 31 KiB/s rd, 1.1 MiB/s wr, 20 op/s
Jan 26 13:45:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:45:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:12.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:12.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 13:45:13 np0005596060 nova_compute[247421]: 2026-01-26 18:45:13.238 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:14.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:45:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:45:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:45:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:45:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:45:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:45:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:45:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:14.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:45:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:14.771 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:45:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:14.772 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:45:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:14.772 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:45:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 13:45:15 np0005596060 nova_compute[247421]: 2026-01-26 18:45:15.118 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:16.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:16.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:45:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 121 MiB data, 418 MiB used, 21 GiB / 21 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 13:45:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:45:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:18.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:45:18 np0005596060 nova_compute[247421]: 2026-01-26 18:45:18.240 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:18.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 283 KiB/s rd, 2.0 MiB/s wr, 58 op/s
Jan 26 13:45:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:20.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:20 np0005596060 nova_compute[247421]: 2026-01-26 18:45:20.122 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:20.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 275 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Jan 26 13:45:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:45:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:22.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:22.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 275 KiB/s rd, 1.1 MiB/s wr, 45 op/s
Jan 26 13:45:23 np0005596060 nova_compute[247421]: 2026-01-26 18:45:23.241 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:23 np0005596060 nova_compute[247421]: 2026-01-26 18:45:23.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:45:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:45:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:24.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:45:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:45:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:24.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:45:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Jan 26 13:45:25 np0005596060 nova_compute[247421]: 2026-01-26 18:45:25.143 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:45:25 np0005596060 nova_compute[247421]: 2026-01-26 18:45:25.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:45:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:45:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:45:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:45:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:45:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 26 13:45:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 13:45:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 26 13:45:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 13:45:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:26.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:26.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:45:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:45:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:45:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:45:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:45:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:45:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Jan 26 13:45:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:45:27 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b93df6e0-18c4-46b2-8e9f-8db3e18d837b does not exist
Jan 26 13:45:27 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 942df59b-e8ce-4079-8b8c-e015f53df6b0 does not exist
Jan 26 13:45:27 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5bf44b2e-be35-42bf-a78d-0de4663d2703 does not exist
Jan 26 13:45:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:45:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:45:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:45:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:45:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:45:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:45:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:45:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:45:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 26 13:45:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 26 13:45:27 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:45:27 np0005596060 podman[307094]: 2026-01-26 18:45:27.60321586 +0000 UTC m=+0.046729021 container create ac708ac1938f124000960e2a5c06f7e9ed9d9058c44c75f9fda2d395df3ebd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:45:27 np0005596060 systemd[1]: Started libpod-conmon-ac708ac1938f124000960e2a5c06f7e9ed9d9058c44c75f9fda2d395df3ebd2a.scope.
Jan 26 13:45:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:45:27 np0005596060 podman[307094]: 2026-01-26 18:45:27.675703683 +0000 UTC m=+0.119216894 container init ac708ac1938f124000960e2a5c06f7e9ed9d9058c44c75f9fda2d395df3ebd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 26 13:45:27 np0005596060 podman[307094]: 2026-01-26 18:45:27.582712587 +0000 UTC m=+0.026225778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:45:27 np0005596060 podman[307094]: 2026-01-26 18:45:27.682833832 +0000 UTC m=+0.126347013 container start ac708ac1938f124000960e2a5c06f7e9ed9d9058c44c75f9fda2d395df3ebd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:45:27 np0005596060 podman[307094]: 2026-01-26 18:45:27.685743405 +0000 UTC m=+0.129256596 container attach ac708ac1938f124000960e2a5c06f7e9ed9d9058c44c75f9fda2d395df3ebd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 13:45:27 np0005596060 systemd[1]: libpod-ac708ac1938f124000960e2a5c06f7e9ed9d9058c44c75f9fda2d395df3ebd2a.scope: Deactivated successfully.
Jan 26 13:45:27 np0005596060 epic_banzai[307110]: 167 167
Jan 26 13:45:27 np0005596060 conmon[307110]: conmon ac708ac1938f12400096 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac708ac1938f124000960e2a5c06f7e9ed9d9058c44c75f9fda2d395df3ebd2a.scope/container/memory.events
Jan 26 13:45:27 np0005596060 podman[307094]: 2026-01-26 18:45:27.68956449 +0000 UTC m=+0.133077641 container died ac708ac1938f124000960e2a5c06f7e9ed9d9058c44c75f9fda2d395df3ebd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:45:27 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b1abe93d74b06b445e953071003ebfdb76a77008660fe366b5db58fd627fd5c4-merged.mount: Deactivated successfully.
Jan 26 13:45:27 np0005596060 podman[307094]: 2026-01-26 18:45:27.724000752 +0000 UTC m=+0.167513903 container remove ac708ac1938f124000960e2a5c06f7e9ed9d9058c44c75f9fda2d395df3ebd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 13:45:27 np0005596060 systemd[1]: libpod-conmon-ac708ac1938f124000960e2a5c06f7e9ed9d9058c44c75f9fda2d395df3ebd2a.scope: Deactivated successfully.
Jan 26 13:45:27 np0005596060 podman[307132]: 2026-01-26 18:45:27.881822261 +0000 UTC m=+0.044950746 container create cb171e75ddb51e57a9c370a9b76e3d02b915401af3ff2b138dfa7b1cc7c4071b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:45:27 np0005596060 systemd[1]: Started libpod-conmon-cb171e75ddb51e57a9c370a9b76e3d02b915401af3ff2b138dfa7b1cc7c4071b.scope.
Jan 26 13:45:27 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:45:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4988556e03127bf055c3b80a89fc4646014e0c1027e0afdf78a066cee05809/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4988556e03127bf055c3b80a89fc4646014e0c1027e0afdf78a066cee05809/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4988556e03127bf055c3b80a89fc4646014e0c1027e0afdf78a066cee05809/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4988556e03127bf055c3b80a89fc4646014e0c1027e0afdf78a066cee05809/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:27 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f4988556e03127bf055c3b80a89fc4646014e0c1027e0afdf78a066cee05809/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:27 np0005596060 podman[307132]: 2026-01-26 18:45:27.958852499 +0000 UTC m=+0.121981034 container init cb171e75ddb51e57a9c370a9b76e3d02b915401af3ff2b138dfa7b1cc7c4071b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euler, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 26 13:45:27 np0005596060 podman[307132]: 2026-01-26 18:45:27.867013401 +0000 UTC m=+0.030141916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:45:27 np0005596060 podman[307132]: 2026-01-26 18:45:27.964582172 +0000 UTC m=+0.127710657 container start cb171e75ddb51e57a9c370a9b76e3d02b915401af3ff2b138dfa7b1cc7c4071b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euler, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:45:27 np0005596060 podman[307132]: 2026-01-26 18:45:27.96810884 +0000 UTC m=+0.131237335 container attach cb171e75ddb51e57a9c370a9b76e3d02b915401af3ff2b138dfa7b1cc7c4071b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 13:45:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:28.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:28 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:45:28 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:45:28 np0005596060 nova_compute[247421]: 2026-01-26 18:45:28.243 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:45:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:28.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:28 np0005596060 nervous_euler[307149]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:45:28 np0005596060 nervous_euler[307149]: --> relative data size: 1.0
Jan 26 13:45:28 np0005596060 nervous_euler[307149]: --> All data devices are unavailable
Jan 26 13:45:28 np0005596060 systemd[1]: libpod-cb171e75ddb51e57a9c370a9b76e3d02b915401af3ff2b138dfa7b1cc7c4071b.scope: Deactivated successfully.
Jan 26 13:45:28 np0005596060 podman[307132]: 2026-01-26 18:45:28.819884754 +0000 UTC m=+0.983013239 container died cb171e75ddb51e57a9c370a9b76e3d02b915401af3ff2b138dfa7b1cc7c4071b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 13:45:28 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8f4988556e03127bf055c3b80a89fc4646014e0c1027e0afdf78a066cee05809-merged.mount: Deactivated successfully.
Jan 26 13:45:28 np0005596060 podman[307132]: 2026-01-26 18:45:28.871924217 +0000 UTC m=+1.035052702 container remove cb171e75ddb51e57a9c370a9b76e3d02b915401af3ff2b138dfa7b1cc7c4071b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:45:28 np0005596060 systemd[1]: libpod-conmon-cb171e75ddb51e57a9c370a9b76e3d02b915401af3ff2b138dfa7b1cc7c4071b.scope: Deactivated successfully.
Jan 26 13:45:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Jan 26 13:45:29 np0005596060 podman[307316]: 2026-01-26 18:45:29.532470775 +0000 UTC m=+0.040693669 container create fc6ed565ad7d982d2496339898076d9c5f885934cb5153e72df9671413041df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_vaughan, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:45:29 np0005596060 systemd[1]: Started libpod-conmon-fc6ed565ad7d982d2496339898076d9c5f885934cb5153e72df9671413041df8.scope.
Jan 26 13:45:29 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:45:29 np0005596060 podman[307316]: 2026-01-26 18:45:29.514201588 +0000 UTC m=+0.022424502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:45:29 np0005596060 podman[307316]: 2026-01-26 18:45:29.611526294 +0000 UTC m=+0.119749238 container init fc6ed565ad7d982d2496339898076d9c5f885934cb5153e72df9671413041df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_vaughan, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:45:29 np0005596060 podman[307316]: 2026-01-26 18:45:29.618890608 +0000 UTC m=+0.127113492 container start fc6ed565ad7d982d2496339898076d9c5f885934cb5153e72df9671413041df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_vaughan, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 26 13:45:29 np0005596060 podman[307316]: 2026-01-26 18:45:29.622471788 +0000 UTC m=+0.130694742 container attach fc6ed565ad7d982d2496339898076d9c5f885934cb5153e72df9671413041df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 13:45:29 np0005596060 jolly_vaughan[307332]: 167 167
Jan 26 13:45:29 np0005596060 systemd[1]: libpod-fc6ed565ad7d982d2496339898076d9c5f885934cb5153e72df9671413041df8.scope: Deactivated successfully.
Jan 26 13:45:29 np0005596060 podman[307316]: 2026-01-26 18:45:29.624330864 +0000 UTC m=+0.132553778 container died fc6ed565ad7d982d2496339898076d9c5f885934cb5153e72df9671413041df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:45:29 np0005596060 systemd[1]: var-lib-containers-storage-overlay-38c4d4dcd8169cfc61e111e392dd95d6d03ecadd02aa91772b2c502f955b1ac6-merged.mount: Deactivated successfully.
Jan 26 13:45:29 np0005596060 nova_compute[247421]: 2026-01-26 18:45:29.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:45:29 np0005596060 nova_compute[247421]: 2026-01-26 18:45:29.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:45:29 np0005596060 nova_compute[247421]: 2026-01-26 18:45:29.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 13:45:29 np0005596060 podman[307316]: 2026-01-26 18:45:29.660139159 +0000 UTC m=+0.168362053 container remove fc6ed565ad7d982d2496339898076d9c5f885934cb5153e72df9671413041df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_vaughan, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 13:45:29 np0005596060 systemd[1]: libpod-conmon-fc6ed565ad7d982d2496339898076d9c5f885934cb5153e72df9671413041df8.scope: Deactivated successfully.
Jan 26 13:45:29 np0005596060 podman[307354]: 2026-01-26 18:45:29.814989684 +0000 UTC m=+0.040401422 container create f841d6d787ab936e9997415ebdf871a7e22c19bb20f2809eeef1e81dbdc25956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcnulty, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 26 13:45:29 np0005596060 systemd[1]: Started libpod-conmon-f841d6d787ab936e9997415ebdf871a7e22c19bb20f2809eeef1e81dbdc25956.scope.
Jan 26 13:45:29 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:45:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1916cb913a9f488a9554d94c91bcfd3b4cabcc64459604b6d6bbb512bb4bd3fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1916cb913a9f488a9554d94c91bcfd3b4cabcc64459604b6d6bbb512bb4bd3fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1916cb913a9f488a9554d94c91bcfd3b4cabcc64459604b6d6bbb512bb4bd3fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:29 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1916cb913a9f488a9554d94c91bcfd3b4cabcc64459604b6d6bbb512bb4bd3fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:29 np0005596060 podman[307354]: 2026-01-26 18:45:29.887799546 +0000 UTC m=+0.113211304 container init f841d6d787ab936e9997415ebdf871a7e22c19bb20f2809eeef1e81dbdc25956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 13:45:29 np0005596060 podman[307354]: 2026-01-26 18:45:29.798882231 +0000 UTC m=+0.024293989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:45:29 np0005596060 podman[307354]: 2026-01-26 18:45:29.896577846 +0000 UTC m=+0.121989584 container start f841d6d787ab936e9997415ebdf871a7e22c19bb20f2809eeef1e81dbdc25956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcnulty, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 13:45:29 np0005596060 podman[307354]: 2026-01-26 18:45:29.899503559 +0000 UTC m=+0.124915297 container attach f841d6d787ab936e9997415ebdf871a7e22c19bb20f2809eeef1e81dbdc25956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcnulty, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 26 13:45:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:30.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:30 np0005596060 nova_compute[247421]: 2026-01-26 18:45:30.146 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:45:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:30.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]: {
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:    "1": [
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:        {
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            "devices": [
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "/dev/loop3"
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            ],
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            "lv_name": "ceph_lv0",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            "lv_size": "7511998464",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            "name": "ceph_lv0",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            "tags": {
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "ceph.cluster_name": "ceph",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "ceph.crush_device_class": "",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "ceph.encrypted": "0",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "ceph.osd_id": "1",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "ceph.type": "block",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:                "ceph.vdo": "0"
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            },
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            "type": "block",
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:            "vg_name": "ceph_vg0"
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:        }
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]:    ]
Jan 26 13:45:30 np0005596060 nervous_mcnulty[307371]: }
Jan 26 13:45:30 np0005596060 systemd[1]: libpod-f841d6d787ab936e9997415ebdf871a7e22c19bb20f2809eeef1e81dbdc25956.scope: Deactivated successfully.
Jan 26 13:45:30 np0005596060 podman[307354]: 2026-01-26 18:45:30.681941648 +0000 UTC m=+0.907353386 container died f841d6d787ab936e9997415ebdf871a7e22c19bb20f2809eeef1e81dbdc25956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 13:45:30 np0005596060 systemd[1]: var-lib-containers-storage-overlay-1916cb913a9f488a9554d94c91bcfd3b4cabcc64459604b6d6bbb512bb4bd3fd-merged.mount: Deactivated successfully.
Jan 26 13:45:30 np0005596060 podman[307354]: 2026-01-26 18:45:30.739123479 +0000 UTC m=+0.964535217 container remove f841d6d787ab936e9997415ebdf871a7e22c19bb20f2809eeef1e81dbdc25956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_mcnulty, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 13:45:30 np0005596060 systemd[1]: libpod-conmon-f841d6d787ab936e9997415ebdf871a7e22c19bb20f2809eeef1e81dbdc25956.scope: Deactivated successfully.
Jan 26 13:45:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 26 13:45:31 np0005596060 podman[307536]: 2026-01-26 18:45:31.320129187 +0000 UTC m=+0.040117435 container create d6e4076c6d8d26fc509a7d9da09394c27acb704fb71b6cc62757692e9092fac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:45:31 np0005596060 systemd[1]: Started libpod-conmon-d6e4076c6d8d26fc509a7d9da09394c27acb704fb71b6cc62757692e9092fac2.scope.
Jan 26 13:45:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:45:31 np0005596060 podman[307536]: 2026-01-26 18:45:31.380141559 +0000 UTC m=+0.100129827 container init d6e4076c6d8d26fc509a7d9da09394c27acb704fb71b6cc62757692e9092fac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:45:31 np0005596060 podman[307536]: 2026-01-26 18:45:31.385859982 +0000 UTC m=+0.105848230 container start d6e4076c6d8d26fc509a7d9da09394c27acb704fb71b6cc62757692e9092fac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:45:31 np0005596060 podman[307536]: 2026-01-26 18:45:31.388748254 +0000 UTC m=+0.108736502 container attach d6e4076c6d8d26fc509a7d9da09394c27acb704fb71b6cc62757692e9092fac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 26 13:45:31 np0005596060 naughty_noether[307552]: 167 167
Jan 26 13:45:31 np0005596060 systemd[1]: libpod-d6e4076c6d8d26fc509a7d9da09394c27acb704fb71b6cc62757692e9092fac2.scope: Deactivated successfully.
Jan 26 13:45:31 np0005596060 podman[307536]: 2026-01-26 18:45:31.390503258 +0000 UTC m=+0.110491496 container died d6e4076c6d8d26fc509a7d9da09394c27acb704fb71b6cc62757692e9092fac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:45:31 np0005596060 podman[307536]: 2026-01-26 18:45:31.303491911 +0000 UTC m=+0.023480179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:45:31 np0005596060 systemd[1]: var-lib-containers-storage-overlay-8f2f60b2c9d1b5d5d8ba65ebc6eafd97d5fc0095c277f513b1191c37e7706e4f-merged.mount: Deactivated successfully.
Jan 26 13:45:31 np0005596060 podman[307536]: 2026-01-26 18:45:31.423142525 +0000 UTC m=+0.143130773 container remove d6e4076c6d8d26fc509a7d9da09394c27acb704fb71b6cc62757692e9092fac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:45:31 np0005596060 systemd[1]: libpod-conmon-d6e4076c6d8d26fc509a7d9da09394c27acb704fb71b6cc62757692e9092fac2.scope: Deactivated successfully.
Jan 26 13:45:31 np0005596060 podman[307575]: 2026-01-26 18:45:31.57519389 +0000 UTC m=+0.041265684 container create 24b97f4aa68656c3aac927293f123a4d7d364f038cb26ab882ad834d4a8403c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ride, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 26 13:45:31 np0005596060 systemd[1]: Started libpod-conmon-24b97f4aa68656c3aac927293f123a4d7d364f038cb26ab882ad834d4a8403c2.scope.
Jan 26 13:45:31 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:45:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e809553cf569c9e9d255464387dc22582d9f743f048835da1e2f49ec624e8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e809553cf569c9e9d255464387dc22582d9f743f048835da1e2f49ec624e8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e809553cf569c9e9d255464387dc22582d9f743f048835da1e2f49ec624e8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:31 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e809553cf569c9e9d255464387dc22582d9f743f048835da1e2f49ec624e8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:31 np0005596060 podman[307575]: 2026-01-26 18:45:31.557252161 +0000 UTC m=+0.023323965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:45:31 np0005596060 podman[307575]: 2026-01-26 18:45:31.654835073 +0000 UTC m=+0.120906897 container init 24b97f4aa68656c3aac927293f123a4d7d364f038cb26ab882ad834d4a8403c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ride, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:45:31 np0005596060 podman[307575]: 2026-01-26 18:45:31.663094989 +0000 UTC m=+0.129166803 container start 24b97f4aa68656c3aac927293f123a4d7d364f038cb26ab882ad834d4a8403c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:45:31 np0005596060 podman[307575]: 2026-01-26 18:45:31.666868254 +0000 UTC m=+0.132940248 container attach 24b97f4aa68656c3aac927293f123a4d7d364f038cb26ab882ad834d4a8403c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ride, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:45:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:45:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:32.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:32.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:32 np0005596060 suspicious_ride[307591]: {
Jan 26 13:45:32 np0005596060 suspicious_ride[307591]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:45:32 np0005596060 suspicious_ride[307591]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:45:32 np0005596060 suspicious_ride[307591]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:45:32 np0005596060 suspicious_ride[307591]:        "osd_id": 1,
Jan 26 13:45:32 np0005596060 suspicious_ride[307591]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:45:32 np0005596060 suspicious_ride[307591]:        "type": "bluestore"
Jan 26 13:45:32 np0005596060 suspicious_ride[307591]:    }
Jan 26 13:45:32 np0005596060 suspicious_ride[307591]: }
Jan 26 13:45:32 np0005596060 systemd[1]: libpod-24b97f4aa68656c3aac927293f123a4d7d364f038cb26ab882ad834d4a8403c2.scope: Deactivated successfully.
Jan 26 13:45:32 np0005596060 podman[307575]: 2026-01-26 18:45:32.501479798 +0000 UTC m=+0.967551592 container died 24b97f4aa68656c3aac927293f123a4d7d364f038cb26ab882ad834d4a8403c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:45:32 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b1e809553cf569c9e9d255464387dc22582d9f743f048835da1e2f49ec624e8a-merged.mount: Deactivated successfully.
Jan 26 13:45:32 np0005596060 podman[307575]: 2026-01-26 18:45:32.553458049 +0000 UTC m=+1.019529853 container remove 24b97f4aa68656c3aac927293f123a4d7d364f038cb26ab882ad834d4a8403c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ride, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:45:32 np0005596060 systemd[1]: libpod-conmon-24b97f4aa68656c3aac927293f123a4d7d364f038cb26ab882ad834d4a8403c2.scope: Deactivated successfully.
Jan 26 13:45:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:45:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:45:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:45:32 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:45:32 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 6c288e53-618b-43a9-b37c-0a04fdcd8dfd does not exist
Jan 26 13:45:32 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 84ed1aa9-9a91-4472-b23a-133e5c22464f does not exist
Jan 26 13:45:32 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 9c6e7142-7530-499c-ab50-7b49c4d78a55 does not exist
Jan 26 13:45:32 np0005596060 nova_compute[247421]: 2026-01-26 18:45:32.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:45:32 np0005596060 nova_compute[247421]: 2026-01-26 18:45:32.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:45:32 np0005596060 nova_compute[247421]: 2026-01-26 18:45:32.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:45:32 np0005596060 nova_compute[247421]: 2026-01-26 18:45:32.711 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:45:32 np0005596060 nova_compute[247421]: 2026-01-26 18:45:32.712 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:45:32 np0005596060 nova_compute[247421]: 2026-01-26 18:45:32.712 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:45:32 np0005596060 nova_compute[247421]: 2026-01-26 18:45:32.712 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 13:45:32 np0005596060 nova_compute[247421]: 2026-01-26 18:45:32.712 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:45:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 16 KiB/s rd, 19 KiB/s wr, 8 op/s
Jan 26 13:45:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Jan 26 13:45:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Jan 26 13:45:33 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Jan 26 13:45:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:45:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966455072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.151 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.245 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.304 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.305 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4641MB free_disk=20.942737579345703GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.305 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.305 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.370 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.371 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.392 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:45:33 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:45:33 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:45:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:45:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2925099549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.842 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.847 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.868 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.870 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 13:45:33 np0005596060 nova_compute[247421]: 2026-01-26 18:45:33.870 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:45:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:34.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:45:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:34.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:45:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Jan 26 13:45:34 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Jan 26 13:45:34 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Jan 26 13:45:34 np0005596060 nova_compute[247421]: 2026-01-26 18:45:34.872 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:45:34 np0005596060 nova_compute[247421]: 2026-01-26 18:45:34.872 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 13:45:34 np0005596060 nova_compute[247421]: 2026-01-26 18:45:34.872 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 13:45:34 np0005596060 nova_compute[247421]: 2026-01-26 18:45:34.922 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 13:45:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 28 KiB/s wr, 12 op/s
Jan 26 13:45:35 np0005596060 nova_compute[247421]: 2026-01-26 18:45:35.185 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:45:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Jan 26 13:45:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Jan 26 13:45:35 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Jan 26 13:45:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:36.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:36.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:45:36 np0005596060 podman[307719]: 2026-01-26 18:45:36.801080776 +0000 UTC m=+0.067177772 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 13:45:36 np0005596060 podman[307720]: 2026-01-26 18:45:36.836094061 +0000 UTC m=+0.099077139 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:45:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 173 MiB data, 451 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.4 MiB/s wr, 115 op/s
Jan 26 13:45:37 np0005596060 nova_compute[247421]: 2026-01-26 18:45:37.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:45:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:45:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:38.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:45:38 np0005596060 nova_compute[247421]: 2026-01-26 18:45:38.247 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:45:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:38.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 200 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 161 op/s
Jan 26 13:45:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:40.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:40 np0005596060 nova_compute[247421]: 2026-01-26 18:45:40.188 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:45:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:45:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:40.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:45:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 200 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 6.0 MiB/s rd, 5.9 MiB/s wr, 122 op/s
Jan 26 13:45:41 np0005596060 nova_compute[247421]: 2026-01-26 18:45:41.337 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Acquiring lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:45:41 np0005596060 nova_compute[247421]: 2026-01-26 18:45:41.337 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:45:41 np0005596060 nova_compute[247421]: 2026-01-26 18:45:41.356 247428 DEBUG nova.compute.manager [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 13:45:41 np0005596060 nova_compute[247421]: 2026-01-26 18:45:41.425 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:45:41 np0005596060 nova_compute[247421]: 2026-01-26 18:45:41.426 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:45:41 np0005596060 nova_compute[247421]: 2026-01-26 18:45:41.432 247428 DEBUG nova.virt.hardware [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 13:45:41 np0005596060 nova_compute[247421]: 2026-01-26 18:45:41.432 247428 INFO nova.compute.claims [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Claim successful on node compute-0.ctlplane.example.com
Jan 26 13:45:41 np0005596060 nova_compute[247421]: 2026-01-26 18:45:41.530 247428 DEBUG oslo_concurrency.processutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:45:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:45:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Jan 26 13:45:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Jan 26 13:45:41 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Jan 26 13:45:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:45:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2243381657' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:45:42 np0005596060 nova_compute[247421]: 2026-01-26 18:45:42.000 247428 DEBUG oslo_concurrency.processutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:45:42 np0005596060 nova_compute[247421]: 2026-01-26 18:45:42.006 247428 DEBUG nova.compute.provider_tree [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:45:42 np0005596060 nova_compute[247421]: 2026-01-26 18:45:42.089 247428 DEBUG nova.scheduler.client.report [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:45:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:42.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:42 np0005596060 nova_compute[247421]: 2026-01-26 18:45:42.306 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:45:42 np0005596060 nova_compute[247421]: 2026-01-26 18:45:42.308 247428 DEBUG nova.compute.manager [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 13:45:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:42.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:42 np0005596060 nova_compute[247421]: 2026-01-26 18:45:42.770 247428 DEBUG nova.compute.manager [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 13:45:42 np0005596060 nova_compute[247421]: 2026-01-26 18:45:42.771 247428 DEBUG nova.network.neutron [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 13:45:42 np0005596060 nova_compute[247421]: 2026-01-26 18:45:42.805 247428 INFO nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 13:45:42 np0005596060 nova_compute[247421]: 2026-01-26 18:45:42.918 247428 DEBUG nova.compute.manager [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 13:45:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 200 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 125 op/s
Jan 26 13:45:42 np0005596060 nova_compute[247421]: 2026-01-26 18:45:42.960 247428 DEBUG nova.policy [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ab4f5e4c36dd409fa5bb8295edb56a1e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f6d1f7624fe846da936bdf952d988dca', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.250 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.258 247428 DEBUG nova.compute.manager [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.260 247428 DEBUG nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.261 247428 INFO nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Creating image(s)
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.307 247428 DEBUG nova.storage.rbd_utils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] rbd image 8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.351 247428 DEBUG nova.storage.rbd_utils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] rbd image 8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.381 247428 DEBUG nova.storage.rbd_utils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] rbd image 8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.386 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Acquiring lock "09faf681a7a442b6de76b304e7138d9140c11b33" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.387 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "09faf681a7a442b6de76b304e7138d9140c11b33" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.659 247428 DEBUG nova.virt.libvirt.imagebackend [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Image locations are: [{'url': 'rbd://d4cd1917-5876-51b6-bc64-65a16199754d/images/78a38f51-2188-4186-ba53-2edab9be0ff2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://d4cd1917-5876-51b6-bc64-65a16199754d/images/78a38f51-2188-4186-ba53-2edab9be0ff2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.720 247428 DEBUG nova.virt.libvirt.imagebackend [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Selected location: {'url': 'rbd://d4cd1917-5876-51b6-bc64-65a16199754d/images/78a38f51-2188-4186-ba53-2edab9be0ff2/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.721 247428 DEBUG nova.storage.rbd_utils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] cloning images/78a38f51-2188-4186-ba53-2edab9be0ff2@snap to None/8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 26 13:45:43 np0005596060 nova_compute[247421]: 2026-01-26 18:45:43.987 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:43 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:43.988 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:45:43 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:43.988 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:45:43 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:43.989 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:45:44 np0005596060 nova_compute[247421]: 2026-01-26 18:45:44.020 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "09faf681a7a442b6de76b304e7138d9140c11b33" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:45:44 np0005596060 nova_compute[247421]: 2026-01-26 18:45:44.123 247428 DEBUG nova.network.neutron [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Successfully created port: d41e8380-4816-45cd-bcca-7871397467e5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 26 13:45:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:44.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:45:44
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['volumes', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'vms', 'backups']
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:45:44 np0005596060 nova_compute[247421]: 2026-01-26 18:45:44.181 247428 DEBUG nova.objects.instance [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lazy-loading 'migration_context' on Instance uuid 8392b231-e975-4b6c-b6e8-e2a5101c59fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:45:44 np0005596060 nova_compute[247421]: 2026-01-26 18:45:44.199 247428 DEBUG nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 26 13:45:44 np0005596060 nova_compute[247421]: 2026-01-26 18:45:44.199 247428 DEBUG nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Ensure instance console log exists: /var/lib/nova/instances/8392b231-e975-4b6c-b6e8-e2a5101c59fa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 26 13:45:44 np0005596060 nova_compute[247421]: 2026-01-26 18:45:44.200 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:45:44 np0005596060 nova_compute[247421]: 2026-01-26 18:45:44.200 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:45:44 np0005596060 nova_compute[247421]: 2026-01-26 18:45:44.200 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:45:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:44.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 200 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.1 MiB/s wr, 108 op/s
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:45:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:45:45 np0005596060 nova_compute[247421]: 2026-01-26 18:45:45.190 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:46 np0005596060 nova_compute[247421]: 2026-01-26 18:45:46.078 247428 DEBUG nova.network.neutron [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Successfully updated port: d41e8380-4816-45cd-bcca-7871397467e5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 26 13:45:46 np0005596060 nova_compute[247421]: 2026-01-26 18:45:46.098 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Acquiring lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:45:46 np0005596060 nova_compute[247421]: 2026-01-26 18:45:46.099 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Acquired lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:45:46 np0005596060 nova_compute[247421]: 2026-01-26 18:45:46.099 247428 DEBUG nova.network.neutron [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:45:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:45:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:46.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:45:46 np0005596060 nova_compute[247421]: 2026-01-26 18:45:46.176 247428 DEBUG nova.compute.manager [req-310effa3-dd57-42f2-a479-8511a3d2b6fa req-f6857308-a567-4eae-854e-5dcb525db37c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received event network-changed-d41e8380-4816-45cd-bcca-7871397467e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:45:46 np0005596060 nova_compute[247421]: 2026-01-26 18:45:46.176 247428 DEBUG nova.compute.manager [req-310effa3-dd57-42f2-a479-8511a3d2b6fa req-f6857308-a567-4eae-854e-5dcb525db37c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Refreshing instance network info cache due to event network-changed-d41e8380-4816-45cd-bcca-7871397467e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:45:46 np0005596060 nova_compute[247421]: 2026-01-26 18:45:46.176 247428 DEBUG oslo_concurrency.lockutils [req-310effa3-dd57-42f2-a479-8511a3d2b6fa req-f6857308-a567-4eae-854e-5dcb525db37c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:45:46 np0005596060 nova_compute[247421]: 2026-01-26 18:45:46.238 247428 DEBUG nova.network.neutron [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 26 13:45:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:46.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:45:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 200 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.4 MiB/s wr, 59 op/s
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.212 247428 DEBUG nova.network.neutron [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Updating instance_info_cache with network_info: [{"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.236 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Releasing lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.236 247428 DEBUG nova.compute.manager [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Instance network_info: |[{"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.237 247428 DEBUG oslo_concurrency.lockutils [req-310effa3-dd57-42f2-a479-8511a3d2b6fa req-f6857308-a567-4eae-854e-5dcb525db37c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.237 247428 DEBUG nova.network.neutron [req-310effa3-dd57-42f2-a479-8511a3d2b6fa req-f6857308-a567-4eae-854e-5dcb525db37c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Refreshing network info cache for port d41e8380-4816-45cd-bcca-7871397467e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.239 247428 DEBUG nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Start _get_guest_xml network_info=[{"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T18:45:30Z,direct_url=<?>,disk_format='raw',id=78a38f51-2188-4186-ba53-2edab9be0ff2,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-2116933427',owner='f6d1f7624fe846da936bdf952d988dca',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T18:45:37Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '78a38f51-2188-4186-ba53-2edab9be0ff2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.243 247428 WARNING nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.247 247428 DEBUG nova.virt.libvirt.host [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.248 247428 DEBUG nova.virt.libvirt.host [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.254 247428 DEBUG nova.virt.libvirt.host [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.254 247428 DEBUG nova.virt.libvirt.host [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.255 247428 DEBUG nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.256 247428 DEBUG nova.virt.hardware [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T18:45:30Z,direct_url=<?>,disk_format='raw',id=78a38f51-2188-4186-ba53-2edab9be0ff2,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-2116933427',owner='f6d1f7624fe846da936bdf952d988dca',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T18:45:37Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.256 247428 DEBUG nova.virt.hardware [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.256 247428 DEBUG nova.virt.hardware [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.257 247428 DEBUG nova.virt.hardware [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.257 247428 DEBUG nova.virt.hardware [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.257 247428 DEBUG nova.virt.hardware [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.257 247428 DEBUG nova.virt.hardware [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.258 247428 DEBUG nova.virt.hardware [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.258 247428 DEBUG nova.virt.hardware [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.258 247428 DEBUG nova.virt.hardware [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.258 247428 DEBUG nova.virt.hardware [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.261 247428 DEBUG oslo_concurrency.processutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:45:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:45:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/883154550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.689 247428 DEBUG oslo_concurrency.processutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.718 247428 DEBUG nova.storage.rbd_utils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] rbd image 8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:45:47 np0005596060 nova_compute[247421]: 2026-01-26 18:45:47.723 247428 DEBUG oslo_concurrency.processutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:45:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:48.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.368 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:48.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:45:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1504050741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.542 247428 DEBUG oslo_concurrency.processutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.819s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.544 247428 DEBUG nova.virt.libvirt.vif [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1945150663',display_name='tempest-TestSnapshotPattern-server-1945150663',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1945150663',id=30,image_ref='78a38f51-2188-4186-ba53-2edab9be0ff2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCAYmVs+UW2XJsRtBIbdZbz28ZVdt7AiOxfdjjSsjnkL6p6XTA2fhA867rw0hqdCm+lPM0yPV4ff9dVLHk7OAzo0CgTYKG/4Lv9EiKZeI+OUhOQtFQJysHTnBrgkAFHfCQ==',key_name='tempest-TestSnapshotPattern-1728523139',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6d1f7624fe846da936bdf952d988dca',ramdisk_id='',reservation_id='r-bw8lz6zh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='0da4d154-1c5d-435f-bc88-07c4b9e6f79b',image_min_disk='1',image_min_ram='0',image_owner_id='f6d1f7624fe846da936bdf952d988dca',image_owner_project_name='tempest-TestSnapshotPattern-612206442',image_owner_user_name='tempest-TestSnapshotPattern-612206442-project-member',image_user_id='ab4f5e4c36dd409fa5bb8295edb56a1e',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-612206442',owner_user_name='tempest-TestSnapshotPattern-612206442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:45:43Z,user_data=None,user_id='ab4f5e4c36dd409fa5bb8295edb56a1e',uuid=8392b231-e9
75-4b6c-b6e8-e2a5101c59fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.544 247428 DEBUG nova.network.os_vif_util [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Converting VIF {"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.545 247428 DEBUG nova.network.os_vif_util [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:1c:1b,bridge_name='br-int',has_traffic_filtering=True,id=d41e8380-4816-45cd-bcca-7871397467e5,network=Network(3c92bd0c-b67a-4232-823a-830d97d73785),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41e8380-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.546 247428 DEBUG nova.objects.instance [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lazy-loading 'pci_devices' on Instance uuid 8392b231-e975-4b6c-b6e8-e2a5101c59fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.562 247428 DEBUG nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  <uuid>8392b231-e975-4b6c-b6e8-e2a5101c59fa</uuid>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  <name>instance-0000001e</name>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <nova:name>tempest-TestSnapshotPattern-server-1945150663</nova:name>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:45:47</nova:creationTime>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <nova:user uuid="ab4f5e4c36dd409fa5bb8295edb56a1e">tempest-TestSnapshotPattern-612206442-project-member</nova:user>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <nova:project uuid="f6d1f7624fe846da936bdf952d988dca">tempest-TestSnapshotPattern-612206442</nova:project>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="78a38f51-2188-4186-ba53-2edab9be0ff2"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <nova:port uuid="d41e8380-4816-45cd-bcca-7871397467e5">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <entry name="serial">8392b231-e975-4b6c-b6e8-e2a5101c59fa</entry>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <entry name="uuid">8392b231-e975-4b6c-b6e8-e2a5101c59fa</entry>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk.config">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:bc:1c:1b"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <target dev="tapd41e8380-48"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/8392b231-e975-4b6c-b6e8-e2a5101c59fa/console.log" append="off"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <input type="keyboard" bus="usb"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:45:48 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:45:48 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:45:48 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:45:48 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.564 247428 DEBUG nova.compute.manager [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Preparing to wait for external event network-vif-plugged-d41e8380-4816-45cd-bcca-7871397467e5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.564 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Acquiring lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.564 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.565 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.565 247428 DEBUG nova.virt.libvirt.vif [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1945150663',display_name='tempest-TestSnapshotPattern-server-1945150663',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1945150663',id=30,image_ref='78a38f51-2188-4186-ba53-2edab9be0ff2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCAYmVs+UW2XJsRtBIbdZbz28ZVdt7AiOxfdjjSsjnkL6p6XTA2fhA867rw0hqdCm+lPM0yPV4ff9dVLHk7OAzo0CgTYKG/4Lv9EiKZeI+OUhOQtFQJysHTnBrgkAFHfCQ==',key_name='tempest-TestSnapshotPattern-1728523139',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f6d1f7624fe846da936bdf952d988dca',ramdisk_id='',reservation_id='r-bw8lz6zh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='0da4d154-1c5d-435f-bc88-07c4b9e6f79b',image_min_disk='1',image_min_ram='0',image_owner_id='f6d1f7624fe846da936bdf952d988dca',image_owner_project_name='tempest-TestSnapshotPattern-612206442',image_owner_user_name='tempest-TestSnapshotPattern-612206442-project-member',image_user_id='ab4f5e4c36dd409fa5bb8295edb56a1e',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-612206442',owner_user_name='tempest-TestSnapshotPattern-612206442-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:45:43Z,user_data=None,user_id='ab4f5e4c36dd409fa5bb8295edb56a1e',uuid=8
392b231-e975-4b6c-b6e8-e2a5101c59fa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.566 247428 DEBUG nova.network.os_vif_util [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Converting VIF {"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.566 247428 DEBUG nova.network.os_vif_util [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:1c:1b,bridge_name='br-int',has_traffic_filtering=True,id=d41e8380-4816-45cd-bcca-7871397467e5,network=Network(3c92bd0c-b67a-4232-823a-830d97d73785),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41e8380-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.567 247428 DEBUG os_vif [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:1c:1b,bridge_name='br-int',has_traffic_filtering=True,id=d41e8380-4816-45cd-bcca-7871397467e5,network=Network(3c92bd0c-b67a-4232-823a-830d97d73785),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41e8380-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.567 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.568 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.568 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.571 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.572 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd41e8380-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.572 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd41e8380-48, col_values=(('external_ids', {'iface-id': 'd41e8380-4816-45cd-bcca-7871397467e5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:1c:1b', 'vm-uuid': '8392b231-e975-4b6c-b6e8-e2a5101c59fa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.574 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:48 np0005596060 NetworkManager[48900]: <info>  [1769453148.5751] manager: (tapd41e8380-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.577 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.582 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.583 247428 INFO os_vif [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:1c:1b,bridge_name='br-int',has_traffic_filtering=True,id=d41e8380-4816-45cd-bcca-7871397467e5,network=Network(3c92bd0c-b67a-4232-823a-830d97d73785),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41e8380-48')#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.625 247428 DEBUG nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.626 247428 DEBUG nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.626 247428 DEBUG nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] No VIF found with MAC fa:16:3e:bc:1c:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.627 247428 INFO nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Using config drive#033[00m
Jan 26 13:45:48 np0005596060 nova_compute[247421]: 2026-01-26 18:45:48.652 247428 DEBUG nova.storage.rbd_utils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] rbd image 8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:45:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 200 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 1.1 KiB/s wr, 36 op/s
Jan 26 13:45:49 np0005596060 nova_compute[247421]: 2026-01-26 18:45:49.595 247428 INFO nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Creating config drive at /var/lib/nova/instances/8392b231-e975-4b6c-b6e8-e2a5101c59fa/disk.config#033[00m
Jan 26 13:45:49 np0005596060 nova_compute[247421]: 2026-01-26 18:45:49.600 247428 DEBUG oslo_concurrency.processutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8392b231-e975-4b6c-b6e8-e2a5101c59fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd1l_ov4z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:45:49 np0005596060 nova_compute[247421]: 2026-01-26 18:45:49.642 247428 DEBUG nova.network.neutron [req-310effa3-dd57-42f2-a479-8511a3d2b6fa req-f6857308-a567-4eae-854e-5dcb525db37c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Updated VIF entry in instance network info cache for port d41e8380-4816-45cd-bcca-7871397467e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:45:49 np0005596060 nova_compute[247421]: 2026-01-26 18:45:49.643 247428 DEBUG nova.network.neutron [req-310effa3-dd57-42f2-a479-8511a3d2b6fa req-f6857308-a567-4eae-854e-5dcb525db37c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Updating instance_info_cache with network_info: [{"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:45:49 np0005596060 nova_compute[247421]: 2026-01-26 18:45:49.663 247428 DEBUG oslo_concurrency.lockutils [req-310effa3-dd57-42f2-a479-8511a3d2b6fa req-f6857308-a567-4eae-854e-5dcb525db37c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:45:49 np0005596060 nova_compute[247421]: 2026-01-26 18:45:49.738 247428 DEBUG oslo_concurrency.processutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8392b231-e975-4b6c-b6e8-e2a5101c59fa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd1l_ov4z" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:45:49 np0005596060 nova_compute[247421]: 2026-01-26 18:45:49.770 247428 DEBUG nova.storage.rbd_utils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] rbd image 8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:45:49 np0005596060 nova_compute[247421]: 2026-01-26 18:45:49.774 247428 DEBUG oslo_concurrency.processutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8392b231-e975-4b6c-b6e8-e2a5101c59fa/disk.config 8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:45:49 np0005596060 nova_compute[247421]: 2026-01-26 18:45:49.961 247428 DEBUG oslo_concurrency.processutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8392b231-e975-4b6c-b6e8-e2a5101c59fa/disk.config 8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:45:49 np0005596060 nova_compute[247421]: 2026-01-26 18:45:49.962 247428 INFO nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Deleting local config drive /var/lib/nova/instances/8392b231-e975-4b6c-b6e8-e2a5101c59fa/disk.config because it was imported into RBD.#033[00m
Jan 26 13:45:50 np0005596060 kernel: tapd41e8380-48: entered promiscuous mode
Jan 26 13:45:50 np0005596060 NetworkManager[48900]: <info>  [1769453150.0071] manager: (tapd41e8380-48): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.007 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:50 np0005596060 ovn_controller[148842]: 2026-01-26T18:45:50Z|00196|binding|INFO|Claiming lport d41e8380-4816-45cd-bcca-7871397467e5 for this chassis.
Jan 26 13:45:50 np0005596060 ovn_controller[148842]: 2026-01-26T18:45:50Z|00197|binding|INFO|d41e8380-4816-45cd-bcca-7871397467e5: Claiming fa:16:3e:bc:1c:1b 10.100.0.12
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.011 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.015 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.019 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:50 np0005596060 NetworkManager[48900]: <info>  [1769453150.0220] manager: (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Jan 26 13:45:50 np0005596060 NetworkManager[48900]: <info>  [1769453150.0232] manager: (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.021 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.026 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:1c:1b 10.100.0.12'], port_security=['fa:16:3e:bc:1c:1b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8392b231-e975-4b6c-b6e8-e2a5101c59fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c92bd0c-b67a-4232-823a-830d97d73785', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6d1f7624fe846da936bdf952d988dca', 'neutron:revision_number': '2', 'neutron:security_group_ids': '24e47fcc-5b62-4556-b880-35104e4b6ec2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b4ce7d98-bbfb-4f37-af96-1528ef95ee96, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=d41e8380-4816-45cd-bcca-7871397467e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.027 159331 INFO neutron.agent.ovn.metadata.agent [-] Port d41e8380-4816-45cd-bcca-7871397467e5 in datapath 3c92bd0c-b67a-4232-823a-830d97d73785 bound to our chassis#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.028 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3c92bd0c-b67a-4232-823a-830d97d73785#033[00m
Jan 26 13:45:50 np0005596060 systemd-udevd[308154]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.040 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[95c5165e-cb8c-4296-b39d-bdc16bfc96d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.041 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3c92bd0c-b1 in ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:45:50 np0005596060 systemd-machined[213879]: New machine qemu-17-instance-0000001e.
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.043 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3c92bd0c-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.043 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[74d365fa-40fb-4980-856d-977f0010da47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.044 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[721ae5f7-3e66-40e2-87cf-7ff9d3bf6ce9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 NetworkManager[48900]: <info>  [1769453150.0528] device (tapd41e8380-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:45:50 np0005596060 NetworkManager[48900]: <info>  [1769453150.0537] device (tapd41e8380-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.057 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[582a5eb3-0527-4a35-ae36-a639d69ad27e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 systemd[1]: Started Virtual Machine qemu-17-instance-0000001e.
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.083 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[aab822f4-60de-4c47-aa2b-c9a36955abb6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.129 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[b10720ee-bed5-4bef-819d-deeb4d775958]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 NetworkManager[48900]: <info>  [1769453150.1392] manager: (tap3c92bd0c-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/106)
Jan 26 13:45:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.138 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[c26fa22e-f9c4-4ed7-af33-04e79ce83450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:50.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.141 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.156 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:50 np0005596060 ovn_controller[148842]: 2026-01-26T18:45:50Z|00198|binding|INFO|Setting lport d41e8380-4816-45cd-bcca-7871397467e5 ovn-installed in OVS
Jan 26 13:45:50 np0005596060 ovn_controller[148842]: 2026-01-26T18:45:50Z|00199|binding|INFO|Setting lport d41e8380-4816-45cd-bcca-7871397467e5 up in Southbound
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.168 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.171 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[fddd6e5d-9c3d-4bdb-b515-204538e5f76b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.174 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[24cb7e85-e89c-4b5d-afce-cf4f8857e174]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 NetworkManager[48900]: <info>  [1769453150.1963] device (tap3c92bd0c-b0): carrier: link connected
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.203 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[e63b57d7-d403-4666-a039-673562b48cb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.221 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[e43861a6-edad-45f0-a462-02b2411c1b5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c92bd0c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:36:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691477, 'reachable_time': 37009, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308187, 'error': None, 'target': 'ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.238 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b34e24-da88-4553-8ceb-8dfae0d38730]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:3654'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 691477, 'tstamp': 691477}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308188, 'error': None, 'target': 'ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.255 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[3e47ae11-95a6-4cf1-9934-002ef4f817ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3c92bd0c-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:36:54'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691477, 'reachable_time': 37009, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308189, 'error': None, 'target': 'ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.289 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[eb01cb5c-a833-442a-b3d8-332b2761989a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.356 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[ee083e4f-a4e6-4a78-944b-5c58dbe11311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.358 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c92bd0c-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.358 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.359 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c92bd0c-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.360 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:50 np0005596060 kernel: tap3c92bd0c-b0: entered promiscuous mode
Jan 26 13:45:50 np0005596060 NetworkManager[48900]: <info>  [1769453150.3612] manager: (tap3c92bd0c-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.363 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3c92bd0c-b0, col_values=(('external_ids', {'iface-id': '694ebde7-9ee4-4b59-afb6-8479ba63b2ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:45:50 np0005596060 ovn_controller[148842]: 2026-01-26T18:45:50Z|00200|binding|INFO|Releasing lport 694ebde7-9ee4-4b59-afb6-8479ba63b2ad from this chassis (sb_readonly=0)
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.364 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.377 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.378 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3c92bd0c-b67a-4232-823a-830d97d73785.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3c92bd0c-b67a-4232-823a-830d97d73785.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.379 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[20dbff73-59ad-43a1-bc7d-b9b6d15c34a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.379 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-3c92bd0c-b67a-4232-823a-830d97d73785
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/3c92bd0c-b67a-4232-823a-830d97d73785.pid.haproxy
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 3c92bd0c-b67a-4232-823a-830d97d73785
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:45:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:45:50.380 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785', 'env', 'PROCESS_TAG=haproxy-3c92bd0c-b67a-4232-823a-830d97d73785', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3c92bd0c-b67a-4232-823a-830d97d73785.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:45:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:50.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.523 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769453150.5229714, 8392b231-e975-4b6c-b6e8-e2a5101c59fa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.523 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] VM Started (Lifecycle Event)#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.569 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.573 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769453150.523811, 8392b231-e975-4b6c-b6e8-e2a5101c59fa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.574 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] VM Paused (Lifecycle Event)#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.619 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.622 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.654 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.733 247428 DEBUG nova.compute.manager [req-72b67b36-60bc-4f06-9eb0-5930266eac1d req-7f7244d2-05a3-4f8d-bb6c-a08d4bb65d69 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received event network-vif-plugged-d41e8380-4816-45cd-bcca-7871397467e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.734 247428 DEBUG oslo_concurrency.lockutils [req-72b67b36-60bc-4f06-9eb0-5930266eac1d req-7f7244d2-05a3-4f8d-bb6c-a08d4bb65d69 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.734 247428 DEBUG oslo_concurrency.lockutils [req-72b67b36-60bc-4f06-9eb0-5930266eac1d req-7f7244d2-05a3-4f8d-bb6c-a08d4bb65d69 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.734 247428 DEBUG oslo_concurrency.lockutils [req-72b67b36-60bc-4f06-9eb0-5930266eac1d req-7f7244d2-05a3-4f8d-bb6c-a08d4bb65d69 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.734 247428 DEBUG nova.compute.manager [req-72b67b36-60bc-4f06-9eb0-5930266eac1d req-7f7244d2-05a3-4f8d-bb6c-a08d4bb65d69 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Processing event network-vif-plugged-d41e8380-4816-45cd-bcca-7871397467e5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.735 247428 DEBUG nova.compute.manager [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.738 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769453150.7381628, 8392b231-e975-4b6c-b6e8-e2a5101c59fa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.738 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.739 247428 DEBUG nova.virt.libvirt.driver [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.743 247428 INFO nova.virt.libvirt.driver [-] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Instance spawned successfully.#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.743 247428 INFO nova.compute.manager [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Took 7.48 seconds to spawn the instance on the hypervisor.#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.744 247428 DEBUG nova.compute.manager [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:45:50 np0005596060 podman[308264]: 2026-01-26 18:45:50.757823273 +0000 UTC m=+0.055052949 container create 76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.767 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.771 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:45:50 np0005596060 systemd[1]: Started libpod-conmon-76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3.scope.
Jan 26 13:45:50 np0005596060 podman[308264]: 2026-01-26 18:45:50.724734945 +0000 UTC m=+0.021964651 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.823 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:45:50 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:45:50 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96aad4547eb46c7e9c033fb3a85214f060c6d41009262efa3deaf8790f53b28f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:45:50 np0005596060 podman[308264]: 2026-01-26 18:45:50.847267651 +0000 UTC m=+0.144497327 container init 76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:45:50 np0005596060 podman[308264]: 2026-01-26 18:45:50.85282089 +0000 UTC m=+0.150050566 container start 76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.865 247428 INFO nova.compute.manager [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Took 9.47 seconds to build instance.#033[00m
Jan 26 13:45:50 np0005596060 neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785[308279]: [NOTICE]   (308283) : New worker (308285) forked
Jan 26 13:45:50 np0005596060 neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785[308279]: [NOTICE]   (308283) : Loading success.
Jan 26 13:45:50 np0005596060 nova_compute[247421]: 2026-01-26 18:45:50.883 247428 DEBUG oslo_concurrency.lockutils [None req-a301b0d7-ba56-40c9-8763-e5409e99c245 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:45:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 200 MiB data, 465 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 1.1 KiB/s wr, 36 op/s
Jan 26 13:45:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:45:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:52.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:52.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:52 np0005596060 nova_compute[247421]: 2026-01-26 18:45:52.849 247428 DEBUG nova.compute.manager [req-7a62f5a4-acc6-4331-9a0d-ce860ce7a167 req-3b856dcf-95df-46b7-8c25-e626c7d8a2c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received event network-vif-plugged-d41e8380-4816-45cd-bcca-7871397467e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:45:52 np0005596060 nova_compute[247421]: 2026-01-26 18:45:52.849 247428 DEBUG oslo_concurrency.lockutils [req-7a62f5a4-acc6-4331-9a0d-ce860ce7a167 req-3b856dcf-95df-46b7-8c25-e626c7d8a2c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:45:52 np0005596060 nova_compute[247421]: 2026-01-26 18:45:52.849 247428 DEBUG oslo_concurrency.lockutils [req-7a62f5a4-acc6-4331-9a0d-ce860ce7a167 req-3b856dcf-95df-46b7-8c25-e626c7d8a2c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:45:52 np0005596060 nova_compute[247421]: 2026-01-26 18:45:52.850 247428 DEBUG oslo_concurrency.lockutils [req-7a62f5a4-acc6-4331-9a0d-ce860ce7a167 req-3b856dcf-95df-46b7-8c25-e626c7d8a2c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:45:52 np0005596060 nova_compute[247421]: 2026-01-26 18:45:52.850 247428 DEBUG nova.compute.manager [req-7a62f5a4-acc6-4331-9a0d-ce860ce7a167 req-3b856dcf-95df-46b7-8c25-e626c7d8a2c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] No waiting events found dispatching network-vif-plugged-d41e8380-4816-45cd-bcca-7871397467e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:45:52 np0005596060 nova_compute[247421]: 2026-01-26 18:45:52.850 247428 WARNING nova.compute.manager [req-7a62f5a4-acc6-4331-9a0d-ce860ce7a167 req-3b856dcf-95df-46b7-8c25-e626c7d8a2c0 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received unexpected event network-vif-plugged-d41e8380-4816-45cd-bcca-7871397467e5 for instance with vm_state active and task_state None.#033[00m
Jan 26 13:45:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 200 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 16 KiB/s wr, 92 op/s
Jan 26 13:45:53 np0005596060 nova_compute[247421]: 2026-01-26 18:45:53.371 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:53 np0005596060 nova_compute[247421]: 2026-01-26 18:45:53.574 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:54.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:54.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 200 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 86 op/s
Jan 26 13:45:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:56.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:56.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:56 np0005596060 nova_compute[247421]: 2026-01-26 18:45:56.660 247428 DEBUG nova.compute.manager [req-9eae8cc2-388e-459e-99f5-7603f5fe36e1 req-9f3a410c-7d47-4f6a-98e2-46f15dbe3cfd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received event network-changed-d41e8380-4816-45cd-bcca-7871397467e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:45:56 np0005596060 nova_compute[247421]: 2026-01-26 18:45:56.661 247428 DEBUG nova.compute.manager [req-9eae8cc2-388e-459e-99f5-7603f5fe36e1 req-9f3a410c-7d47-4f6a-98e2-46f15dbe3cfd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Refreshing instance network info cache due to event network-changed-d41e8380-4816-45cd-bcca-7871397467e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:45:56 np0005596060 nova_compute[247421]: 2026-01-26 18:45:56.661 247428 DEBUG oslo_concurrency.lockutils [req-9eae8cc2-388e-459e-99f5-7603f5fe36e1 req-9f3a410c-7d47-4f6a-98e2-46f15dbe3cfd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:45:56 np0005596060 nova_compute[247421]: 2026-01-26 18:45:56.661 247428 DEBUG oslo_concurrency.lockutils [req-9eae8cc2-388e-459e-99f5-7603f5fe36e1 req-9f3a410c-7d47-4f6a-98e2-46f15dbe3cfd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:45:56 np0005596060 nova_compute[247421]: 2026-01-26 18:45:56.662 247428 DEBUG nova.network.neutron [req-9eae8cc2-388e-459e-99f5-7603f5fe36e1 req-9f3a410c-7d47-4f6a-98e2-46f15dbe3cfd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Refreshing network info cache for port d41e8380-4816-45cd-bcca-7871397467e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:45:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:45:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 200 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 104 op/s
Jan 26 13:45:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:45:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:45:58.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:45:58 np0005596060 nova_compute[247421]: 2026-01-26 18:45:58.186 247428 DEBUG nova.network.neutron [req-9eae8cc2-388e-459e-99f5-7603f5fe36e1 req-9f3a410c-7d47-4f6a-98e2-46f15dbe3cfd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Updated VIF entry in instance network info cache for port d41e8380-4816-45cd-bcca-7871397467e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:45:58 np0005596060 nova_compute[247421]: 2026-01-26 18:45:58.187 247428 DEBUG nova.network.neutron [req-9eae8cc2-388e-459e-99f5-7603f5fe36e1 req-9f3a410c-7d47-4f6a-98e2-46f15dbe3cfd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Updating instance_info_cache with network_info: [{"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:45:58 np0005596060 nova_compute[247421]: 2026-01-26 18:45:58.207 247428 DEBUG oslo_concurrency.lockutils [req-9eae8cc2-388e-459e-99f5-7603f5fe36e1 req-9f3a410c-7d47-4f6a-98e2-46f15dbe3cfd 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:45:58 np0005596060 nova_compute[247421]: 2026-01-26 18:45:58.373 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:45:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:45:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:45:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:45:58.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:45:58 np0005596060 nova_compute[247421]: 2026-01-26 18:45:58.576 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:45:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 200 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 88 op/s
Jan 26 13:46:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:00.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:00.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 200 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 77 op/s
Jan 26 13:46:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:46:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:02.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:02.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 200 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 77 op/s
Jan 26 13:46:03 np0005596060 nova_compute[247421]: 2026-01-26 18:46:03.375 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:03 np0005596060 nova_compute[247421]: 2026-01-26 18:46:03.578 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002181407616787642 of space, bias 1.0, pg target 0.6544222850362926 quantized to 32 (current 32)
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004069646554531919 of space, bias 1.0, pg target 1.2208939663595757 quantized to 32 (current 32)
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 26 13:46:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.003000075s ======
Jan 26 13:46:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:04.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000075s
Jan 26 13:46:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:04.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 200 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 567 KiB/s rd, 18 op/s
Jan 26 13:46:05 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:05Z|00025|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.12
Jan 26 13:46:05 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:05Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:bc:1c:1b 10.100.0.12
Jan 26 13:46:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:46:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:06.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:46:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:06.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:46:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 206 MiB data, 469 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 285 KiB/s wr, 50 op/s
Jan 26 13:46:07 np0005596060 podman[308353]: 2026-01-26 18:46:07.791788069 +0000 UTC m=+0.049579562 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 13:46:07 np0005596060 podman[308354]: 2026-01-26 18:46:07.841415801 +0000 UTC m=+0.096461805 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 26 13:46:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:08.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:08 np0005596060 nova_compute[247421]: 2026-01-26 18:46:08.379 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:08.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:08 np0005596060 nova_compute[247421]: 2026-01-26 18:46:08.580 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 214 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 52 op/s
Jan 26 13:46:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:09Z|00027|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.12
Jan 26 13:46:09 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:09Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:bc:1c:1b 10.100.0.12
Jan 26 13:46:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:10.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:10 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:10Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bc:1c:1b 10.100.0.12
Jan 26 13:46:10 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:10Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bc:1c:1b 10.100.0.12
Jan 26 13:46:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:10.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 214 MiB data, 471 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 499 KiB/s wr, 52 op/s
Jan 26 13:46:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:46:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:12.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:12.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 217 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 545 KiB/s wr, 54 op/s
Jan 26 13:46:13 np0005596060 nova_compute[247421]: 2026-01-26 18:46:13.381 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:13 np0005596060 nova_compute[247421]: 2026-01-26 18:46:13.582 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:46:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:46:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:46:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:46:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:46:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:46:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:14.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:14.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:14.773 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:46:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:14.775 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:46:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:14.776 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:46:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 217 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 545 KiB/s wr, 54 op/s
Jan 26 13:46:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:16.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:46:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:16.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:46:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:46:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 217 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 545 KiB/s wr, 54 op/s
Jan 26 13:46:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:18.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:18 np0005596060 nova_compute[247421]: 2026-01-26 18:46:18.384 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:18.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:18 np0005596060 nova_compute[247421]: 2026-01-26 18:46:18.583 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 217 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 226 KiB/s rd, 264 KiB/s wr, 22 op/s
Jan 26 13:46:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:46:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:20.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:46:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:20.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 217 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 50 KiB/s wr, 2 op/s
Jan 26 13:46:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:46:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:22.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:22.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:22 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 217 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 53 KiB/s wr, 3 op/s
Jan 26 13:46:23 np0005596060 nova_compute[247421]: 2026-01-26 18:46:23.421 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:23 np0005596060 nova_compute[247421]: 2026-01-26 18:46:23.586 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:23 np0005596060 nova_compute[247421]: 2026-01-26 18:46:23.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:46:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:24.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:24.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:24 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 217 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 0 op/s
Jan 26 13:46:25 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:25Z|00201|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Jan 26 13:46:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:26.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:26.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:26 np0005596060 nova_compute[247421]: 2026-01-26 18:46:26.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:46:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:46:26 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 217 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 0 op/s
Jan 26 13:46:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:28.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:28 np0005596060 nova_compute[247421]: 2026-01-26 18:46:28.462 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:46:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:28.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:46:28 np0005596060 nova_compute[247421]: 2026-01-26 18:46:28.587 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:28 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 217 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 6.7 KiB/s wr, 0 op/s
Jan 26 13:46:29 np0005596060 nova_compute[247421]: 2026-01-26 18:46:29.631 247428 DEBUG nova.compute.manager [None req-4eb488e4-d721-4ddf-b704-2434024343b3 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 13:46:29 np0005596060 nova_compute[247421]: 2026-01-26 18:46:29.694 247428 INFO nova.compute.manager [None req-4eb488e4-d721-4ddf-b704-2434024343b3 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] instance snapshotting
Jan 26 13:46:29 np0005596060 nova_compute[247421]: 2026-01-26 18:46:29.970 247428 INFO nova.virt.libvirt.driver [None req-4eb488e4-d721-4ddf-b704-2434024343b3 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Beginning live snapshot process
Jan 26 13:46:30 np0005596060 nova_compute[247421]: 2026-01-26 18:46:30.162 247428 DEBUG nova.storage.rbd_utils [None req-4eb488e4-d721-4ddf-b704-2434024343b3 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] creating snapshot(0292ae5ca2ad4302b7acf48e74870c4e) on rbd image(8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 13:46:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:30.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:30.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:30 np0005596060 nova_compute[247421]: 2026-01-26 18:46:30.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:46:30 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 217 MiB data, 472 MiB used, 21 GiB / 21 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Jan 26 13:46:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Jan 26 13:46:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Jan 26 13:46:31 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Jan 26 13:46:31 np0005596060 nova_compute[247421]: 2026-01-26 18:46:31.377 247428 DEBUG nova.storage.rbd_utils [None req-4eb488e4-d721-4ddf-b704-2434024343b3 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] cloning vms/8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk@0292ae5ca2ad4302b7acf48e74870c4e to images/e769eb72-7388-4813-bb73-4ef4180cf6e9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 13:46:31 np0005596060 nova_compute[247421]: 2026-01-26 18:46:31.553 247428 DEBUG nova.storage.rbd_utils [None req-4eb488e4-d721-4ddf-b704-2434024343b3 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] flattening images/e769eb72-7388-4813-bb73-4ef4180cf6e9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 13:46:31 np0005596060 nova_compute[247421]: 2026-01-26 18:46:31.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:46:31 np0005596060 nova_compute[247421]: 2026-01-26 18:46:31.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 13:46:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:46:32 np0005596060 nova_compute[247421]: 2026-01-26 18:46:32.122 247428 DEBUG nova.storage.rbd_utils [None req-4eb488e4-d721-4ddf-b704-2434024343b3 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] removing snapshot(0292ae5ca2ad4302b7acf48e74870c4e) on rbd image(8392b231-e975-4b6c-b6e8-e2a5101c59fa_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 13:46:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:32.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Jan 26 13:46:32 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Jan 26 13:46:32 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Jan 26 13:46:32 np0005596060 nova_compute[247421]: 2026-01-26 18:46:32.358 247428 DEBUG nova.storage.rbd_utils [None req-4eb488e4-d721-4ddf-b704-2434024343b3 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] creating snapshot(snap) on rbd image(e769eb72-7388-4813-bb73-4ef4180cf6e9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 13:46:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:32.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:32 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 279 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 4.1 MiB/s rd, 6.7 MiB/s wr, 108 op/s
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Jan 26 13:46:33 np0005596060 nova_compute[247421]: 2026-01-26 18:46:33.464 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:33 np0005596060 nova_compute[247421]: 2026-01-26 18:46:33.589 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:33 np0005596060 nova_compute[247421]: 2026-01-26 18:46:33.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:46:33 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 493df6c8-36b4-4917-bcfe-d8a9102973d6 does not exist
Jan 26 13:46:33 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d4b6d91b-945b-44b8-b4c7-051a27b94a1d does not exist
Jan 26 13:46:33 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 66495d3a-df64-43ea-8106-ccca4671ae11 does not exist
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:33.979561) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453193979609, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2018, "num_deletes": 252, "total_data_size": 3657798, "memory_usage": 3707104, "flush_reason": "Manual Compaction"}
Jan 26 13:46:33 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453194005945, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3581817, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46277, "largest_seqno": 48294, "table_properties": {"data_size": 3572633, "index_size": 5806, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18815, "raw_average_key_size": 20, "raw_value_size": 3554278, "raw_average_value_size": 3859, "num_data_blocks": 253, "num_entries": 921, "num_filter_entries": 921, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769452996, "oldest_key_time": 1769452996, "file_creation_time": 1769453193, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 26433 microseconds, and 7289 cpu microseconds.
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.005991) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3581817 bytes OK
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.006012) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.008039) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.008053) EVENT_LOG_v1 {"time_micros": 1769453194008049, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.008071) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3649600, prev total WAL file size 3649600, number of live WAL files 2.
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.009136) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3497KB)], [104(10MB)]
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453194009215, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 14230641, "oldest_snapshot_seqno": -1}
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7295 keys, 12139008 bytes, temperature: kUnknown
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453194085486, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 12139008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12090106, "index_size": 29590, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18245, "raw_key_size": 188577, "raw_average_key_size": 25, "raw_value_size": 11959115, "raw_average_value_size": 1639, "num_data_blocks": 1179, "num_entries": 7295, "num_filter_entries": 7295, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769453194, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.085711) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 12139008 bytes
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.086752) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.4 rd, 159.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 10.2 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 7820, records dropped: 525 output_compression: NoCompression
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.086767) EVENT_LOG_v1 {"time_micros": 1769453194086760, "job": 62, "event": "compaction_finished", "compaction_time_micros": 76340, "compaction_time_cpu_micros": 37765, "output_level": 6, "num_output_files": 1, "total_output_size": 12139008, "num_input_records": 7820, "num_output_records": 7295, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453194087384, "job": 62, "event": "table_file_deletion", "file_number": 106}
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453194089274, "job": 62, "event": "table_file_deletion", "file_number": 104}
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.009073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.089331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.089337) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.089338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.089340) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:46:34.089341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:46:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:34.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:46:34 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:46:34 np0005596060 podman[308875]: 2026-01-26 18:46:34.561050468 +0000 UTC m=+0.049137550 container create 83c88ec5cef33bbb4f13a95215924cdd96bbd4e4195a2ea5b33ba6a9aa4d102d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:46:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:34.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:34 np0005596060 systemd[1]: Started libpod-conmon-83c88ec5cef33bbb4f13a95215924cdd96bbd4e4195a2ea5b33ba6a9aa4d102d.scope.
Jan 26 13:46:34 np0005596060 podman[308875]: 2026-01-26 18:46:34.537280653 +0000 UTC m=+0.025367775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:46:34 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:46:34 np0005596060 nova_compute[247421]: 2026-01-26 18:46:34.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:46:34 np0005596060 nova_compute[247421]: 2026-01-26 18:46:34.653 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 13:46:34 np0005596060 nova_compute[247421]: 2026-01-26 18:46:34.653 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 13:46:34 np0005596060 nova_compute[247421]: 2026-01-26 18:46:34.671 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 13:46:34 np0005596060 podman[308875]: 2026-01-26 18:46:34.672362364 +0000 UTC m=+0.160449546 container init 83c88ec5cef33bbb4f13a95215924cdd96bbd4e4195a2ea5b33ba6a9aa4d102d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 13:46:34 np0005596060 nova_compute[247421]: 2026-01-26 18:46:34.672 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquired lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 13:46:34 np0005596060 nova_compute[247421]: 2026-01-26 18:46:34.672 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 13:46:34 np0005596060 nova_compute[247421]: 2026-01-26 18:46:34.672 247428 DEBUG nova.objects.instance [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8392b231-e975-4b6c-b6e8-e2a5101c59fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 13:46:34 np0005596060 podman[308875]: 2026-01-26 18:46:34.680474727 +0000 UTC m=+0.168561799 container start 83c88ec5cef33bbb4f13a95215924cdd96bbd4e4195a2ea5b33ba6a9aa4d102d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_napier, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:46:34 np0005596060 podman[308875]: 2026-01-26 18:46:34.684679742 +0000 UTC m=+0.172766854 container attach 83c88ec5cef33bbb4f13a95215924cdd96bbd4e4195a2ea5b33ba6a9aa4d102d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_napier, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 13:46:34 np0005596060 frosty_napier[308892]: 167 167
Jan 26 13:46:34 np0005596060 systemd[1]: libpod-83c88ec5cef33bbb4f13a95215924cdd96bbd4e4195a2ea5b33ba6a9aa4d102d.scope: Deactivated successfully.
Jan 26 13:46:34 np0005596060 podman[308897]: 2026-01-26 18:46:34.728081958 +0000 UTC m=+0.026372581 container died 83c88ec5cef33bbb4f13a95215924cdd96bbd4e4195a2ea5b33ba6a9aa4d102d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 13:46:34 np0005596060 systemd[1]: var-lib-containers-storage-overlay-263becb7d36d8d0404500b893928988da2773e004855638b77961f5f927fd248-merged.mount: Deactivated successfully.
Jan 26 13:46:34 np0005596060 podman[308897]: 2026-01-26 18:46:34.766236293 +0000 UTC m=+0.064526916 container remove 83c88ec5cef33bbb4f13a95215924cdd96bbd4e4195a2ea5b33ba6a9aa4d102d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_napier, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 13:46:34 np0005596060 systemd[1]: libpod-conmon-83c88ec5cef33bbb4f13a95215924cdd96bbd4e4195a2ea5b33ba6a9aa4d102d.scope: Deactivated successfully.
Jan 26 13:46:34 np0005596060 podman[308918]: 2026-01-26 18:46:34.944251077 +0000 UTC m=+0.042247138 container create ef8d09085df73d997202615747c9c8b52d7af7be802ed5947c2e523c9c9ed79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:46:34 np0005596060 nova_compute[247421]: 2026-01-26 18:46:34.944 247428 INFO nova.virt.libvirt.driver [None req-4eb488e4-d721-4ddf-b704-2434024343b3 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Snapshot image upload complete#033[00m
Jan 26 13:46:34 np0005596060 nova_compute[247421]: 2026-01-26 18:46:34.945 247428 INFO nova.compute.manager [None req-4eb488e4-d721-4ddf-b704-2434024343b3 ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Took 5.25 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 26 13:46:34 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 279 MiB data, 480 MiB used, 21 GiB / 21 GiB avail; 5.5 MiB/s rd, 8.9 MiB/s wr, 144 op/s
Jan 26 13:46:34 np0005596060 systemd[1]: Started libpod-conmon-ef8d09085df73d997202615747c9c8b52d7af7be802ed5947c2e523c9c9ed79a.scope.
Jan 26 13:46:35 np0005596060 podman[308918]: 2026-01-26 18:46:34.925528849 +0000 UTC m=+0.023524920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:46:35 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:46:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c296eb205dba7696b1c94929c39195b6c7d809c28c5b05e902703e1a4a29a69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c296eb205dba7696b1c94929c39195b6c7d809c28c5b05e902703e1a4a29a69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c296eb205dba7696b1c94929c39195b6c7d809c28c5b05e902703e1a4a29a69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c296eb205dba7696b1c94929c39195b6c7d809c28c5b05e902703e1a4a29a69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:35 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c296eb205dba7696b1c94929c39195b6c7d809c28c5b05e902703e1a4a29a69/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:35 np0005596060 podman[308918]: 2026-01-26 18:46:35.054475795 +0000 UTC m=+0.152471846 container init ef8d09085df73d997202615747c9c8b52d7af7be802ed5947c2e523c9c9ed79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:46:35 np0005596060 podman[308918]: 2026-01-26 18:46:35.06184902 +0000 UTC m=+0.159845061 container start ef8d09085df73d997202615747c9c8b52d7af7be802ed5947c2e523c9c9ed79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 13:46:35 np0005596060 podman[308918]: 2026-01-26 18:46:35.064446225 +0000 UTC m=+0.162442276 container attach ef8d09085df73d997202615747c9c8b52d7af7be802ed5947c2e523c9c9ed79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 13:46:35 np0005596060 jovial_williams[308934]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:46:35 np0005596060 jovial_williams[308934]: --> relative data size: 1.0
Jan 26 13:46:35 np0005596060 jovial_williams[308934]: --> All data devices are unavailable
Jan 26 13:46:35 np0005596060 systemd[1]: libpod-ef8d09085df73d997202615747c9c8b52d7af7be802ed5947c2e523c9c9ed79a.scope: Deactivated successfully.
Jan 26 13:46:35 np0005596060 podman[308949]: 2026-01-26 18:46:35.879339656 +0000 UTC m=+0.024565626 container died ef8d09085df73d997202615747c9c8b52d7af7be802ed5947c2e523c9c9ed79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:46:35 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0c296eb205dba7696b1c94929c39195b6c7d809c28c5b05e902703e1a4a29a69-merged.mount: Deactivated successfully.
Jan 26 13:46:35 np0005596060 podman[308949]: 2026-01-26 18:46:35.925115181 +0000 UTC m=+0.070341151 container remove ef8d09085df73d997202615747c9c8b52d7af7be802ed5947c2e523c9c9ed79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 13:46:35 np0005596060 systemd[1]: libpod-conmon-ef8d09085df73d997202615747c9c8b52d7af7be802ed5947c2e523c9c9ed79a.scope: Deactivated successfully.
Jan 26 13:46:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:36.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Jan 26 13:46:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Jan 26 13:46:36 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Jan 26 13:46:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:36.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:36 np0005596060 podman[309104]: 2026-01-26 18:46:36.574896221 +0000 UTC m=+0.045679054 container create 89f69d6e5bf3602e7a982f7e796c2cc9358179fdd7424dc801f0d5d158979628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:46:36 np0005596060 systemd[1]: Started libpod-conmon-89f69d6e5bf3602e7a982f7e796c2cc9358179fdd7424dc801f0d5d158979628.scope.
Jan 26 13:46:36 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:46:36 np0005596060 podman[309104]: 2026-01-26 18:46:36.552123281 +0000 UTC m=+0.022906214 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:46:36 np0005596060 podman[309104]: 2026-01-26 18:46:36.660293238 +0000 UTC m=+0.131076171 container init 89f69d6e5bf3602e7a982f7e796c2cc9358179fdd7424dc801f0d5d158979628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 13:46:36 np0005596060 podman[309104]: 2026-01-26 18:46:36.667368135 +0000 UTC m=+0.138150968 container start 89f69d6e5bf3602e7a982f7e796c2cc9358179fdd7424dc801f0d5d158979628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 13:46:36 np0005596060 podman[309104]: 2026-01-26 18:46:36.670651187 +0000 UTC m=+0.141434060 container attach 89f69d6e5bf3602e7a982f7e796c2cc9358179fdd7424dc801f0d5d158979628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 13:46:36 np0005596060 blissful_wing[309120]: 167 167
Jan 26 13:46:36 np0005596060 systemd[1]: libpod-89f69d6e5bf3602e7a982f7e796c2cc9358179fdd7424dc801f0d5d158979628.scope: Deactivated successfully.
Jan 26 13:46:36 np0005596060 podman[309104]: 2026-01-26 18:46:36.674317039 +0000 UTC m=+0.145099872 container died 89f69d6e5bf3602e7a982f7e796c2cc9358179fdd7424dc801f0d5d158979628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:46:36 np0005596060 nova_compute[247421]: 2026-01-26 18:46:36.672 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Updating instance_info_cache with network_info: [{"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:46:36 np0005596060 nova_compute[247421]: 2026-01-26 18:46:36.691 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Releasing lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:46:36 np0005596060 nova_compute[247421]: 2026-01-26 18:46:36.691 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 26 13:46:36 np0005596060 nova_compute[247421]: 2026-01-26 18:46:36.691 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:46:36 np0005596060 nova_compute[247421]: 2026-01-26 18:46:36.692 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:46:36 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ee648b88dfa9346fd0e16d2bdc50a6f04e089d5e052000729f220b5af882889c-merged.mount: Deactivated successfully.
Jan 26 13:46:36 np0005596060 podman[309104]: 2026-01-26 18:46:36.709377876 +0000 UTC m=+0.180160709 container remove 89f69d6e5bf3602e7a982f7e796c2cc9358179fdd7424dc801f0d5d158979628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wing, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:46:36 np0005596060 nova_compute[247421]: 2026-01-26 18:46:36.715 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:46:36 np0005596060 nova_compute[247421]: 2026-01-26 18:46:36.716 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:46:36 np0005596060 nova_compute[247421]: 2026-01-26 18:46:36.717 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:46:36 np0005596060 nova_compute[247421]: 2026-01-26 18:46:36.717 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:46:36 np0005596060 nova_compute[247421]: 2026-01-26 18:46:36.717 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:46:36 np0005596060 systemd[1]: libpod-conmon-89f69d6e5bf3602e7a982f7e796c2cc9358179fdd7424dc801f0d5d158979628.scope: Deactivated successfully.
Jan 26 13:46:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:46:36 np0005596060 podman[309145]: 2026-01-26 18:46:36.887024241 +0000 UTC m=+0.053384496 container create bf16031ed0bf04fa4b660391f2457b0201f9508ec7321d194de90ac99866e1b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:46:36 np0005596060 systemd[1]: Started libpod-conmon-bf16031ed0bf04fa4b660391f2457b0201f9508ec7321d194de90ac99866e1b8.scope.
Jan 26 13:46:36 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:46:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a616f90a7004e64ddcef65ad55fc4fb68c88b3749e68486c01478ca74f90030/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a616f90a7004e64ddcef65ad55fc4fb68c88b3749e68486c01478ca74f90030/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:36 np0005596060 podman[309145]: 2026-01-26 18:46:36.868359614 +0000 UTC m=+0.034719949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:46:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a616f90a7004e64ddcef65ad55fc4fb68c88b3749e68486c01478ca74f90030/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:36 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a616f90a7004e64ddcef65ad55fc4fb68c88b3749e68486c01478ca74f90030/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:36 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 278 MiB data, 517 MiB used, 20 GiB / 21 GiB avail; 8.8 MiB/s rd, 16 MiB/s wr, 177 op/s
Jan 26 13:46:36 np0005596060 podman[309145]: 2026-01-26 18:46:36.984391978 +0000 UTC m=+0.150752313 container init bf16031ed0bf04fa4b660391f2457b0201f9508ec7321d194de90ac99866e1b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 13:46:36 np0005596060 podman[309145]: 2026-01-26 18:46:36.995789133 +0000 UTC m=+0.162149388 container start bf16031ed0bf04fa4b660391f2457b0201f9508ec7321d194de90ac99866e1b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 13:46:36 np0005596060 podman[309145]: 2026-01-26 18:46:36.999357722 +0000 UTC m=+0.165718077 container attach bf16031ed0bf04fa4b660391f2457b0201f9508ec7321d194de90ac99866e1b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:46:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:46:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1627219671' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.162 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.234 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.235 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.371 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.372 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4340MB free_disk=20.936378479003906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.372 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.372 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.453 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance 8392b231-e975-4b6c-b6e8-e2a5101c59fa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.453 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.453 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.482 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]: {
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:    "1": [
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:        {
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            "devices": [
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "/dev/loop3"
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            ],
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            "lv_name": "ceph_lv0",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            "lv_size": "7511998464",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            "name": "ceph_lv0",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            "tags": {
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "ceph.cluster_name": "ceph",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "ceph.crush_device_class": "",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "ceph.encrypted": "0",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "ceph.osd_id": "1",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "ceph.type": "block",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:                "ceph.vdo": "0"
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            },
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            "type": "block",
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:            "vg_name": "ceph_vg0"
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:        }
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]:    ]
Jan 26 13:46:37 np0005596060 busy_lichterman[309180]: }
Jan 26 13:46:37 np0005596060 systemd[1]: libpod-bf16031ed0bf04fa4b660391f2457b0201f9508ec7321d194de90ac99866e1b8.scope: Deactivated successfully.
Jan 26 13:46:37 np0005596060 podman[309145]: 2026-01-26 18:46:37.845582206 +0000 UTC m=+1.011942461 container died bf16031ed0bf04fa4b660391f2457b0201f9508ec7321d194de90ac99866e1b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 13:46:37 np0005596060 systemd[1]: var-lib-containers-storage-overlay-0a616f90a7004e64ddcef65ad55fc4fb68c88b3749e68486c01478ca74f90030-merged.mount: Deactivated successfully.
Jan 26 13:46:37 np0005596060 podman[309145]: 2026-01-26 18:46:37.91365396 +0000 UTC m=+1.080014205 container remove bf16031ed0bf04fa4b660391f2457b0201f9508ec7321d194de90ac99866e1b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lichterman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 26 13:46:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:46:37 np0005596060 systemd[1]: libpod-conmon-bf16031ed0bf04fa4b660391f2457b0201f9508ec7321d194de90ac99866e1b8.scope: Deactivated successfully.
Jan 26 13:46:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/563474207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.945 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.956 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:46:37 np0005596060 nova_compute[247421]: 2026-01-26 18:46:37.974 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:46:37 np0005596060 podman[309213]: 2026-01-26 18:46:37.983238491 +0000 UTC m=+0.098200498 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:46:37 np0005596060 podman[309221]: 2026-01-26 18:46:37.984305848 +0000 UTC m=+0.100137937 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.020 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.021 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:46:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:38.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.514 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:46:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:38.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.591 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:38 np0005596060 podman[309413]: 2026-01-26 18:46:38.606438955 +0000 UTC m=+0.045460338 container create 3ab3b6ad753b8ce36fc961f275d98896a98f5df95ab657fdba36bbd6d6b46a03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:46:38 np0005596060 systemd[1]: Started libpod-conmon-3ab3b6ad753b8ce36fc961f275d98896a98f5df95ab657fdba36bbd6d6b46a03.scope.
Jan 26 13:46:38 np0005596060 podman[309413]: 2026-01-26 18:46:38.586548978 +0000 UTC m=+0.025570381 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:46:38 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.694 247428 DEBUG nova.compute.manager [req-6ca7d77b-f70c-490d-8651-e96a14824ee9 req-747d0d6c-1c32-441a-890e-8924223ef2ab 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received event network-changed-d41e8380-4816-45cd-bcca-7871397467e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.695 247428 DEBUG nova.compute.manager [req-6ca7d77b-f70c-490d-8651-e96a14824ee9 req-747d0d6c-1c32-441a-890e-8924223ef2ab 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Refreshing instance network info cache due to event network-changed-d41e8380-4816-45cd-bcca-7871397467e5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.695 247428 DEBUG oslo_concurrency.lockutils [req-6ca7d77b-f70c-490d-8651-e96a14824ee9 req-747d0d6c-1c32-441a-890e-8924223ef2ab 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.695 247428 DEBUG oslo_concurrency.lockutils [req-6ca7d77b-f70c-490d-8651-e96a14824ee9 req-747d0d6c-1c32-441a-890e-8924223ef2ab 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.695 247428 DEBUG nova.network.neutron [req-6ca7d77b-f70c-490d-8651-e96a14824ee9 req-747d0d6c-1c32-441a-890e-8924223ef2ab 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Refreshing network info cache for port d41e8380-4816-45cd-bcca-7871397467e5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:46:38 np0005596060 podman[309413]: 2026-01-26 18:46:38.702392076 +0000 UTC m=+0.141413479 container init 3ab3b6ad753b8ce36fc961f275d98896a98f5df95ab657fdba36bbd6d6b46a03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 13:46:38 np0005596060 podman[309413]: 2026-01-26 18:46:38.711222917 +0000 UTC m=+0.150244300 container start 3ab3b6ad753b8ce36fc961f275d98896a98f5df95ab657fdba36bbd6d6b46a03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_leakey, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 26 13:46:38 np0005596060 podman[309413]: 2026-01-26 18:46:38.71453787 +0000 UTC m=+0.153559273 container attach 3ab3b6ad753b8ce36fc961f275d98896a98f5df95ab657fdba36bbd6d6b46a03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:46:38 np0005596060 xenodochial_leakey[309429]: 167 167
Jan 26 13:46:38 np0005596060 systemd[1]: libpod-3ab3b6ad753b8ce36fc961f275d98896a98f5df95ab657fdba36bbd6d6b46a03.scope: Deactivated successfully.
Jan 26 13:46:38 np0005596060 podman[309413]: 2026-01-26 18:46:38.721447823 +0000 UTC m=+0.160469226 container died 3ab3b6ad753b8ce36fc961f275d98896a98f5df95ab657fdba36bbd6d6b46a03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_leakey, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:46:38 np0005596060 systemd[1]: var-lib-containers-storage-overlay-19f960496cd07a96d11a9be784dc951343660514170c5c3caadf2692b05f3283-merged.mount: Deactivated successfully.
Jan 26 13:46:38 np0005596060 podman[309413]: 2026-01-26 18:46:38.759483875 +0000 UTC m=+0.198505258 container remove 3ab3b6ad753b8ce36fc961f275d98896a98f5df95ab657fdba36bbd6d6b46a03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_leakey, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:46:38 np0005596060 systemd[1]: libpod-conmon-3ab3b6ad753b8ce36fc961f275d98896a98f5df95ab657fdba36bbd6d6b46a03.scope: Deactivated successfully.
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.847 247428 DEBUG oslo_concurrency.lockutils [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Acquiring lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.847 247428 DEBUG oslo_concurrency.lockutils [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.848 247428 DEBUG oslo_concurrency.lockutils [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Acquiring lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.848 247428 DEBUG oslo_concurrency.lockutils [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.849 247428 DEBUG oslo_concurrency.lockutils [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.851 247428 INFO nova.compute.manager [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Terminating instance#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.853 247428 DEBUG nova.compute.manager [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:46:38 np0005596060 kernel: tapd41e8380-48 (unregistering): left promiscuous mode
Jan 26 13:46:38 np0005596060 NetworkManager[48900]: <info>  [1769453198.9108] device (tapd41e8380-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.933 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.936 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:38 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:38Z|00202|binding|INFO|Releasing lport d41e8380-4816-45cd-bcca-7871397467e5 from this chassis (sb_readonly=0)
Jan 26 13:46:38 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:38Z|00203|binding|INFO|Setting lport d41e8380-4816-45cd-bcca-7871397467e5 down in Southbound
Jan 26 13:46:38 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:38Z|00204|binding|INFO|Removing iface tapd41e8380-48 ovn-installed in OVS
Jan 26 13:46:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:38.940 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:1c:1b 10.100.0.12'], port_security=['fa:16:3e:bc:1c:1b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8392b231-e975-4b6c-b6e8-e2a5101c59fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c92bd0c-b67a-4232-823a-830d97d73785', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6d1f7624fe846da936bdf952d988dca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24e47fcc-5b62-4556-b880-35104e4b6ec2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b4ce7d98-bbfb-4f37-af96-1528ef95ee96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=d41e8380-4816-45cd-bcca-7871397467e5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:46:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:38.941 159331 INFO neutron.agent.ovn.metadata.agent [-] Port d41e8380-4816-45cd-bcca-7871397467e5 in datapath 3c92bd0c-b67a-4232-823a-830d97d73785 unbound from our chassis#033[00m
Jan 26 13:46:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:38.942 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c92bd0c-b67a-4232-823a-830d97d73785, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:46:38 np0005596060 podman[309453]: 2026-01-26 18:46:38.944011032 +0000 UTC m=+0.054277029 container create 118a46c0ccce2754adc0b187a1ab30c694ac2c151ef5a0d12ae2a3f300831a27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 13:46:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:38.945 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b08987ce-6e69-4300-ad69-8341a8be391f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:46:38 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:38.946 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785 namespace which is not needed anymore#033[00m
Jan 26 13:46:38 np0005596060 nova_compute[247421]: 2026-01-26 18:46:38.950 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:38 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 221 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 9.5 MiB/s wr, 176 op/s
Jan 26 13:46:38 np0005596060 systemd[1]: Started libpod-conmon-118a46c0ccce2754adc0b187a1ab30c694ac2c151ef5a0d12ae2a3f300831a27.scope.
Jan 26 13:46:38 np0005596060 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Jan 26 13:46:38 np0005596060 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000001e.scope: Consumed 15.288s CPU time.
Jan 26 13:46:38 np0005596060 systemd-machined[213879]: Machine qemu-17-instance-0000001e terminated.
Jan 26 13:46:39 np0005596060 podman[309453]: 2026-01-26 18:46:38.923872989 +0000 UTC m=+0.034138966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:46:39 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:46:39 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864da470a82b8ce2b3ac63e15a73c5673e0535a64a30c6fe9b1642ec7d8e5f70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:39 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864da470a82b8ce2b3ac63e15a73c5673e0535a64a30c6fe9b1642ec7d8e5f70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:39 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864da470a82b8ce2b3ac63e15a73c5673e0535a64a30c6fe9b1642ec7d8e5f70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:39 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/864da470a82b8ce2b3ac63e15a73c5673e0535a64a30c6fe9b1642ec7d8e5f70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:46:39 np0005596060 podman[309453]: 2026-01-26 18:46:39.05142475 +0000 UTC m=+0.161690787 container init 118a46c0ccce2754adc0b187a1ab30c694ac2c151ef5a0d12ae2a3f300831a27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tesla, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 13:46:39 np0005596060 podman[309453]: 2026-01-26 18:46:39.064450386 +0000 UTC m=+0.174716343 container start 118a46c0ccce2754adc0b187a1ab30c694ac2c151ef5a0d12ae2a3f300831a27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tesla, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 13:46:39 np0005596060 podman[309453]: 2026-01-26 18:46:39.068470107 +0000 UTC m=+0.178736094 container attach 118a46c0ccce2754adc0b187a1ab30c694ac2c151ef5a0d12ae2a3f300831a27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tesla, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 26 13:46:39 np0005596060 neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785[308279]: [NOTICE]   (308283) : haproxy version is 2.8.14-c23fe91
Jan 26 13:46:39 np0005596060 neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785[308279]: [NOTICE]   (308283) : path to executable is /usr/sbin/haproxy
Jan 26 13:46:39 np0005596060 neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785[308279]: [WARNING]  (308283) : Exiting Master process...
Jan 26 13:46:39 np0005596060 neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785[308279]: [WARNING]  (308283) : Exiting Master process...
Jan 26 13:46:39 np0005596060 neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785[308279]: [ALERT]    (308283) : Current worker (308285) exited with code 143 (Terminated)
Jan 26 13:46:39 np0005596060 neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785[308279]: [WARNING]  (308283) : All workers exited. Exiting... (0)
Jan 26 13:46:39 np0005596060 kernel: tapd41e8380-48: entered promiscuous mode
Jan 26 13:46:39 np0005596060 systemd-udevd[309472]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:46:39 np0005596060 NetworkManager[48900]: <info>  [1769453199.0793] manager: (tapd41e8380-48): new Tun device (/org/freedesktop/NetworkManager/Devices/108)
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.079 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:39Z|00205|binding|INFO|Claiming lport d41e8380-4816-45cd-bcca-7871397467e5 for this chassis.
Jan 26 13:46:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:39Z|00206|binding|INFO|d41e8380-4816-45cd-bcca-7871397467e5: Claiming fa:16:3e:bc:1c:1b 10.100.0.12
Jan 26 13:46:39 np0005596060 systemd[1]: libpod-76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3.scope: Deactivated successfully.
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.088 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:1c:1b 10.100.0.12'], port_security=['fa:16:3e:bc:1c:1b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8392b231-e975-4b6c-b6e8-e2a5101c59fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c92bd0c-b67a-4232-823a-830d97d73785', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6d1f7624fe846da936bdf952d988dca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24e47fcc-5b62-4556-b880-35104e4b6ec2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b4ce7d98-bbfb-4f37-af96-1528ef95ee96, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=d41e8380-4816-45cd-bcca-7871397467e5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:46:39 np0005596060 podman[309494]: 2026-01-26 18:46:39.093661707 +0000 UTC m=+0.062658099 container died 76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 13:46:39 np0005596060 kernel: tapd41e8380-48 (unregistering): left promiscuous mode
Jan 26 13:46:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:39Z|00207|binding|INFO|Setting lport d41e8380-4816-45cd-bcca-7871397467e5 ovn-installed in OVS
Jan 26 13:46:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:39Z|00208|binding|INFO|Setting lport d41e8380-4816-45cd-bcca-7871397467e5 up in Southbound
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.106 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:39 np0005596060 virtnodedevd[247152]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 26 13:46:39 np0005596060 virtnodedevd[247152]: hostname: compute-0
Jan 26 13:46:39 np0005596060 virtnodedevd[247152]: ethtool ioctl error on tapd41e8380-48: No such device
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.109 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:39Z|00209|binding|INFO|Releasing lport d41e8380-4816-45cd-bcca-7871397467e5 from this chassis (sb_readonly=0)
Jan 26 13:46:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:39Z|00210|binding|INFO|Setting lport d41e8380-4816-45cd-bcca-7871397467e5 down in Southbound
Jan 26 13:46:39 np0005596060 ovn_controller[148842]: 2026-01-26T18:46:39Z|00211|binding|INFO|Removing iface tapd41e8380-48 ovn-installed in OVS
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.113 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:39 np0005596060 virtnodedevd[247152]: ethtool ioctl error on tapd41e8380-48: No such device
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.119 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:1c:1b 10.100.0.12'], port_security=['fa:16:3e:bc:1c:1b 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '8392b231-e975-4b6c-b6e8-e2a5101c59fa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3c92bd0c-b67a-4232-823a-830d97d73785', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f6d1f7624fe846da936bdf952d988dca', 'neutron:revision_number': '4', 'neutron:security_group_ids': '24e47fcc-5b62-4556-b880-35104e4b6ec2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b4ce7d98-bbfb-4f37-af96-1528ef95ee96, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=d41e8380-4816-45cd-bcca-7871397467e5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:46:39 np0005596060 virtnodedevd[247152]: ethtool ioctl error on tapd41e8380-48: No such device
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.126 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.129 247428 INFO nova.virt.libvirt.driver [-] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Instance destroyed successfully.#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.129 247428 DEBUG nova.objects.instance [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lazy-loading 'resources' on Instance uuid 8392b231-e975-4b6c-b6e8-e2a5101c59fa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:46:39 np0005596060 virtnodedevd[247152]: ethtool ioctl error on tapd41e8380-48: No such device
Jan 26 13:46:39 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3-userdata-shm.mount: Deactivated successfully.
Jan 26 13:46:39 np0005596060 virtnodedevd[247152]: ethtool ioctl error on tapd41e8380-48: No such device
Jan 26 13:46:39 np0005596060 systemd[1]: var-lib-containers-storage-overlay-96aad4547eb46c7e9c033fb3a85214f060c6d41009262efa3deaf8790f53b28f-merged.mount: Deactivated successfully.
Jan 26 13:46:39 np0005596060 virtnodedevd[247152]: ethtool ioctl error on tapd41e8380-48: No such device
Jan 26 13:46:39 np0005596060 podman[309494]: 2026-01-26 18:46:39.146730165 +0000 UTC m=+0.115726557 container cleanup 76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.145 247428 DEBUG nova.virt.libvirt.vif [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:45:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1945150663',display_name='tempest-TestSnapshotPattern-server-1945150663',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1945150663',id=30,image_ref='78a38f51-2188-4186-ba53-2edab9be0ff2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCAYmVs+UW2XJsRtBIbdZbz28ZVdt7AiOxfdjjSsjnkL6p6XTA2fhA867rw0hqdCm+lPM0yPV4ff9dVLHk7OAzo0CgTYKG/4Lv9EiKZeI+OUhOQtFQJysHTnBrgkAFHfCQ==',key_name='tempest-TestSnapshotPattern-1728523139',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:45:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f6d1f7624fe846da936bdf952d988dca',ramdisk_id='',reservation_id='r-bw8lz6zh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='0da4d154-1c5d-435f-bc88-07c4b9e6f79b',image_min_disk='1',image_min_ram='0',image_owner_id='f6d1f7624fe846da936bdf952d988dca',image_owner_project_name='tempest-TestSnapshotPattern-612206442',image_owner_user_name='tempest-TestSnapshotPattern-612206442-project-member',image_user_id='ab4f5e4c36dd409fa5bb8295edb56a1e',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-612206442',owner_user_name='tempest-TestSnapshotPattern-612206442-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:46:34Z,user_data=None,user_id='ab4f5e4c36dd409fa5bb8295edb56a1e',uuid=8392b231-e975-4b6c-b6e8-e2a5101c59fa,vcpu_model=<?
>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.146 247428 DEBUG nova.network.os_vif_util [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Converting VIF {"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.147 247428 DEBUG nova.network.os_vif_util [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bc:1c:1b,bridge_name='br-int',has_traffic_filtering=True,id=d41e8380-4816-45cd-bcca-7871397467e5,network=Network(3c92bd0c-b67a-4232-823a-830d97d73785),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41e8380-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.147 247428 DEBUG os_vif [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:1c:1b,bridge_name='br-int',has_traffic_filtering=True,id=d41e8380-4816-45cd-bcca-7871397467e5,network=Network(3c92bd0c-b67a-4232-823a-830d97d73785),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41e8380-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.148 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.150 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd41e8380-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:46:39 np0005596060 virtnodedevd[247152]: ethtool ioctl error on tapd41e8380-48: No such device
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.157 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:39 np0005596060 systemd[1]: libpod-conmon-76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3.scope: Deactivated successfully.
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.158 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:46:39 np0005596060 virtnodedevd[247152]: ethtool ioctl error on tapd41e8380-48: No such device
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.163 247428 INFO os_vif [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bc:1c:1b,bridge_name='br-int',has_traffic_filtering=True,id=d41e8380-4816-45cd-bcca-7871397467e5,network=Network(3c92bd0c-b67a-4232-823a-830d97d73785),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd41e8380-48')#033[00m
Jan 26 13:46:39 np0005596060 podman[309544]: 2026-01-26 18:46:39.219771623 +0000 UTC m=+0.041115970 container remove 76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.224 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[95270a06-e1d6-406c-8ee3-3a1f97322f31]: (4, ('Mon Jan 26 06:46:39 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785 (76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3)\n76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3\nMon Jan 26 06:46:39 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785 (76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3)\n76a41a36668fb07a00545d58a856713ffeafcdaf3a837d5803fe10dc257c8de3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.226 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[2c4ad659-d1f2-4057-8f65-6fa35aee14c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.228 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c92bd0c-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.229 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:39 np0005596060 kernel: tap3c92bd0c-b0: left promiscuous mode
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.242 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.245 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e4e1b6-defc-4fff-9d97-af5083a2a0e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.260 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2f09b6-6dad-4352-b68e-c2fd288067c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.262 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c4bd95-f017-44a0-ba93-151651e6d817]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.278 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b2c3b3-fab7-4725-9a38-63e5613900f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 691468, 'reachable_time': 43564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309576, 'error': None, 'target': 'ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:46:39 np0005596060 systemd[1]: run-netns-ovnmeta\x2d3c92bd0c\x2db67a\x2d4232\x2d823a\x2d830d97d73785.mount: Deactivated successfully.
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.283 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3c92bd0c-b67a-4232-823a-830d97d73785 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.284 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[0630155a-db78-47ff-85fb-3596f299c562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.285 159331 INFO neutron.agent.ovn.metadata.agent [-] Port d41e8380-4816-45cd-bcca-7871397467e5 in datapath 3c92bd0c-b67a-4232-823a-830d97d73785 unbound from our chassis#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.286 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c92bd0c-b67a-4232-823a-830d97d73785, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.286 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[c7daf70a-cb0f-470a-a0ec-4cb339ce1bbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.287 159331 INFO neutron.agent.ovn.metadata.agent [-] Port d41e8380-4816-45cd-bcca-7871397467e5 in datapath 3c92bd0c-b67a-4232-823a-830d97d73785 unbound from our chassis#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.288 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3c92bd0c-b67a-4232-823a-830d97d73785, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:46:39 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:39.289 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[05c003bd-6941-4a8e-8057-f349f09f7e31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.559 247428 INFO nova.virt.libvirt.driver [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Deleting instance files /var/lib/nova/instances/8392b231-e975-4b6c-b6e8-e2a5101c59fa_del#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.560 247428 INFO nova.virt.libvirt.driver [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Deletion of /var/lib/nova/instances/8392b231-e975-4b6c-b6e8-e2a5101c59fa_del complete#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.618 247428 INFO nova.compute.manager [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.619 247428 DEBUG oslo.service.loopingcall [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.619 247428 DEBUG nova.compute.manager [-] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.619 247428 DEBUG nova.network.neutron [-] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.687 247428 DEBUG nova.compute.manager [req-987e2017-2c17-4da9-ba39-fe038e5c3c1c req-6cc19259-c5e2-4cfe-bfdb-e8d58c338e46 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received event network-vif-unplugged-d41e8380-4816-45cd-bcca-7871397467e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.687 247428 DEBUG oslo_concurrency.lockutils [req-987e2017-2c17-4da9-ba39-fe038e5c3c1c req-6cc19259-c5e2-4cfe-bfdb-e8d58c338e46 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.688 247428 DEBUG oslo_concurrency.lockutils [req-987e2017-2c17-4da9-ba39-fe038e5c3c1c req-6cc19259-c5e2-4cfe-bfdb-e8d58c338e46 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.688 247428 DEBUG oslo_concurrency.lockutils [req-987e2017-2c17-4da9-ba39-fe038e5c3c1c req-6cc19259-c5e2-4cfe-bfdb-e8d58c338e46 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.688 247428 DEBUG nova.compute.manager [req-987e2017-2c17-4da9-ba39-fe038e5c3c1c req-6cc19259-c5e2-4cfe-bfdb-e8d58c338e46 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] No waiting events found dispatching network-vif-unplugged-d41e8380-4816-45cd-bcca-7871397467e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.688 247428 DEBUG nova.compute.manager [req-987e2017-2c17-4da9-ba39-fe038e5c3c1c req-6cc19259-c5e2-4cfe-bfdb-e8d58c338e46 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received event network-vif-unplugged-d41e8380-4816-45cd-bcca-7871397467e5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:46:39 np0005596060 cool_tesla[309484]: {
Jan 26 13:46:39 np0005596060 cool_tesla[309484]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:46:39 np0005596060 cool_tesla[309484]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:46:39 np0005596060 cool_tesla[309484]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:46:39 np0005596060 cool_tesla[309484]:        "osd_id": 1,
Jan 26 13:46:39 np0005596060 cool_tesla[309484]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:46:39 np0005596060 cool_tesla[309484]:        "type": "bluestore"
Jan 26 13:46:39 np0005596060 cool_tesla[309484]:    }
Jan 26 13:46:39 np0005596060 cool_tesla[309484]: }
Jan 26 13:46:39 np0005596060 systemd[1]: libpod-118a46c0ccce2754adc0b187a1ab30c694ac2c151ef5a0d12ae2a3f300831a27.scope: Deactivated successfully.
Jan 26 13:46:39 np0005596060 podman[309453]: 2026-01-26 18:46:39.968892778 +0000 UTC m=+1.079158735 container died 118a46c0ccce2754adc0b187a1ab30c694ac2c151ef5a0d12ae2a3f300831a27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tesla, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 13:46:39 np0005596060 nova_compute[247421]: 2026-01-26 18:46:39.980 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:46:40 np0005596060 systemd[1]: var-lib-containers-storage-overlay-864da470a82b8ce2b3ac63e15a73c5673e0535a64a30c6fe9b1642ec7d8e5f70-merged.mount: Deactivated successfully.
Jan 26 13:46:40 np0005596060 podman[309453]: 2026-01-26 18:46:40.029484124 +0000 UTC m=+1.139750081 container remove 118a46c0ccce2754adc0b187a1ab30c694ac2c151ef5a0d12ae2a3f300831a27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 13:46:40 np0005596060 systemd[1]: libpod-conmon-118a46c0ccce2754adc0b187a1ab30c694ac2c151ef5a0d12ae2a3f300831a27.scope: Deactivated successfully.
Jan 26 13:46:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:46:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:46:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:46:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:46:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 22ee3101-2dcc-47e7-95ce-542f6e674713 does not exist
Jan 26 13:46:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev e73191cd-e904-40e9-80c4-83a2d78220f4 does not exist
Jan 26 13:46:40 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8792872a-b2c6-4b6a-a9e1-e1061b201924 does not exist
Jan 26 13:46:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:40.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:46:40 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:46:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:40.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:40 np0005596060 nova_compute[247421]: 2026-01-26 18:46:40.740 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:40.740 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:46:40 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:40.741 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:46:40 np0005596060 nova_compute[247421]: 2026-01-26 18:46:40.761 247428 DEBUG nova.network.neutron [-] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:46:40 np0005596060 nova_compute[247421]: 2026-01-26 18:46:40.829 247428 INFO nova.compute.manager [-] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Took 1.21 seconds to deallocate network for instance.#033[00m
Jan 26 13:46:40 np0005596060 nova_compute[247421]: 2026-01-26 18:46:40.875 247428 DEBUG nova.network.neutron [req-6ca7d77b-f70c-490d-8651-e96a14824ee9 req-747d0d6c-1c32-441a-890e-8924223ef2ab 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Updated VIF entry in instance network info cache for port d41e8380-4816-45cd-bcca-7871397467e5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:46:40 np0005596060 nova_compute[247421]: 2026-01-26 18:46:40.876 247428 DEBUG nova.network.neutron [req-6ca7d77b-f70c-490d-8651-e96a14824ee9 req-747d0d6c-1c32-441a-890e-8924223ef2ab 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Updating instance_info_cache with network_info: [{"id": "d41e8380-4816-45cd-bcca-7871397467e5", "address": "fa:16:3e:bc:1c:1b", "network": {"id": "3c92bd0c-b67a-4232-823a-830d97d73785", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-964278989-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f6d1f7624fe846da936bdf952d988dca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd41e8380-48", "ovs_interfaceid": "d41e8380-4816-45cd-bcca-7871397467e5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:46:40 np0005596060 nova_compute[247421]: 2026-01-26 18:46:40.885 247428 DEBUG oslo_concurrency.lockutils [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:46:40 np0005596060 nova_compute[247421]: 2026-01-26 18:46:40.886 247428 DEBUG oslo_concurrency.lockutils [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:46:40 np0005596060 nova_compute[247421]: 2026-01-26 18:46:40.893 247428 DEBUG nova.compute.manager [req-645999a2-098f-4a01-9e31-e693d8342b91 req-357288c0-5647-473a-b422-08768ca83a7c 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received event network-vif-deleted-d41e8380-4816-45cd-bcca-7871397467e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:46:40 np0005596060 nova_compute[247421]: 2026-01-26 18:46:40.898 247428 DEBUG oslo_concurrency.lockutils [req-6ca7d77b-f70c-490d-8651-e96a14824ee9 req-747d0d6c-1c32-441a-890e-8924223ef2ab 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-8392b231-e975-4b6c-b6e8-e2a5101c59fa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:46:40 np0005596060 nova_compute[247421]: 2026-01-26 18:46:40.944 247428 DEBUG oslo_concurrency.processutils [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:46:40 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 221 MiB data, 540 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.9 MiB/s wr, 81 op/s
Jan 26 13:46:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:46:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564456212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.373 247428 DEBUG oslo_concurrency.processutils [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.381 247428 DEBUG nova.compute.provider_tree [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.398 247428 DEBUG nova.scheduler.client.report [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.424 247428 DEBUG oslo_concurrency.lockutils [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.538s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.451 247428 INFO nova.scheduler.client.report [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Deleted allocations for instance 8392b231-e975-4b6c-b6e8-e2a5101c59fa#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.513 247428 DEBUG oslo_concurrency.lockutils [None req-4bc2a221-2eda-4559-810a-ede6bbec1b4b ab4f5e4c36dd409fa5bb8295edb56a1e f6d1f7624fe846da936bdf952d988dca - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:46:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:46:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Jan 26 13:46:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.857 247428 DEBUG nova.compute.manager [req-ca3e9803-6190-4383-ab55-5ee80ffac676 req-25ea89ac-77f9-4365-b1b5-c3dfeab5e25f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received event network-vif-plugged-d41e8380-4816-45cd-bcca-7871397467e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.858 247428 DEBUG oslo_concurrency.lockutils [req-ca3e9803-6190-4383-ab55-5ee80ffac676 req-25ea89ac-77f9-4365-b1b5-c3dfeab5e25f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.858 247428 DEBUG oslo_concurrency.lockutils [req-ca3e9803-6190-4383-ab55-5ee80ffac676 req-25ea89ac-77f9-4365-b1b5-c3dfeab5e25f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.858 247428 DEBUG oslo_concurrency.lockutils [req-ca3e9803-6190-4383-ab55-5ee80ffac676 req-25ea89ac-77f9-4365-b1b5-c3dfeab5e25f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.858 247428 DEBUG nova.compute.manager [req-ca3e9803-6190-4383-ab55-5ee80ffac676 req-25ea89ac-77f9-4365-b1b5-c3dfeab5e25f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] No waiting events found dispatching network-vif-plugged-d41e8380-4816-45cd-bcca-7871397467e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.859 247428 WARNING nova.compute.manager [req-ca3e9803-6190-4383-ab55-5ee80ffac676 req-25ea89ac-77f9-4365-b1b5-c3dfeab5e25f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received unexpected event network-vif-plugged-d41e8380-4816-45cd-bcca-7871397467e5 for instance with vm_state deleted and task_state None.#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.859 247428 DEBUG nova.compute.manager [req-ca3e9803-6190-4383-ab55-5ee80ffac676 req-25ea89ac-77f9-4365-b1b5-c3dfeab5e25f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received event network-vif-plugged-d41e8380-4816-45cd-bcca-7871397467e5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.859 247428 DEBUG oslo_concurrency.lockutils [req-ca3e9803-6190-4383-ab55-5ee80ffac676 req-25ea89ac-77f9-4365-b1b5-c3dfeab5e25f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.859 247428 DEBUG oslo_concurrency.lockutils [req-ca3e9803-6190-4383-ab55-5ee80ffac676 req-25ea89ac-77f9-4365-b1b5-c3dfeab5e25f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.859 247428 DEBUG oslo_concurrency.lockutils [req-ca3e9803-6190-4383-ab55-5ee80ffac676 req-25ea89ac-77f9-4365-b1b5-c3dfeab5e25f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "8392b231-e975-4b6c-b6e8-e2a5101c59fa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.860 247428 DEBUG nova.compute.manager [req-ca3e9803-6190-4383-ab55-5ee80ffac676 req-25ea89ac-77f9-4365-b1b5-c3dfeab5e25f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] No waiting events found dispatching network-vif-plugged-d41e8380-4816-45cd-bcca-7871397467e5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:46:41 np0005596060 nova_compute[247421]: 2026-01-26 18:46:41.860 247428 WARNING nova.compute.manager [req-ca3e9803-6190-4383-ab55-5ee80ffac676 req-25ea89ac-77f9-4365-b1b5-c3dfeab5e25f 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Received unexpected event network-vif-plugged-d41e8380-4816-45cd-bcca-7871397467e5 for instance with vm_state deleted and task_state None.#033[00m
Jan 26 13:46:41 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Jan 26 13:46:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:42.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:42.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:42 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 200 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.9 MiB/s wr, 121 op/s
Jan 26 13:46:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Jan 26 13:46:43 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Jan 26 13:46:43 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Jan 26 13:46:43 np0005596060 nova_compute[247421]: 2026-01-26 18:46:43.516 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:46:44 np0005596060 nova_compute[247421]: 2026-01-26 18:46:44.152 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:46:44
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', '.mgr', 'backups', 'images', 'volumes']
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:46:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:46:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:44.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:46:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:44.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 200 MiB data, 466 MiB used, 21 GiB / 21 GiB avail; 352 KiB/s rd, 285 KiB/s wr, 104 op/s
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:46:44 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:46:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:46:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:46:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:46:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:46:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:46:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:46:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:46:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:46.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:46:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:46.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:46:46 np0005596060 nova_compute[247421]: 2026-01-26 18:46:46.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:46:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:46:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Jan 26 13:46:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Jan 26 13:46:46 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Jan 26 13:46:46 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 119 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 70 KiB/s rd, 5.5 KiB/s wr, 101 op/s
Jan 26 13:46:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:48.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:48 np0005596060 nova_compute[247421]: 2026-01-26 18:46:48.517 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:48.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:48 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 41 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 84 KiB/s rd, 4.5 KiB/s wr, 120 op/s
Jan 26 13:46:49 np0005596060 nova_compute[247421]: 2026-01-26 18:46:49.155 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:46:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:50.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:46:50 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:46:50.744 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 13:46:50 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 41 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Jan 26 13:46:51 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj[95144]: Mon Jan 26 18:46:51 2026: A thread timer expired 1.032386 seconds ago
Jan 26 13:46:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:51.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:51 np0005596060 nova_compute[247421]: 2026-01-26 18:46:51.546 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:51 np0005596060 nova_compute[247421]: 2026-01-26 18:46:51.668 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:46:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Jan 26 13:46:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Jan 26 13:46:51 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Jan 26 13:46:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:52.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:52 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 41 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Jan 26 13:46:53 np0005596060 nova_compute[247421]: 2026-01-26 18:46:53.518 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:53.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:54 np0005596060 nova_compute[247421]: 2026-01-26 18:46:54.128 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769453199.1265557, 8392b231-e975-4b6c-b6e8-e2a5101c59fa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 13:46:54 np0005596060 nova_compute[247421]: 2026-01-26 18:46:54.129 247428 INFO nova.compute.manager [-] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] VM Stopped (Lifecycle Event)
Jan 26 13:46:54 np0005596060 nova_compute[247421]: 2026-01-26 18:46:54.149 247428 DEBUG nova.compute.manager [None req-b4e5d69b-635c-42be-80f2-236c6e76c2cf - - - - - -] [instance: 8392b231-e975-4b6c-b6e8-e2a5101c59fa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 13:46:54 np0005596060 nova_compute[247421]: 2026-01-26 18:46:54.156 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:54.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:54 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 41 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.3 KiB/s wr, 39 op/s
Jan 26 13:46:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:55.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:56.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:46:56 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 41 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Jan 26 13:46:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:57.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:46:58.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:46:58 np0005596060 nova_compute[247421]: 2026-01-26 18:46:58.520 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:58 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:46:59 np0005596060 nova_compute[247421]: 2026-01-26 18:46:59.157 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:46:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:46:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:46:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:46:59.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:00.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:00 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:01.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:47:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:02.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:02 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:03.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:03 np0005596060 nova_compute[247421]: 2026-01-26 18:47:03.563 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:47:04 np0005596060 nova_compute[247421]: 2026-01-26 18:47:04.158 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:47:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:47:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:04.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:47:04 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:05.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:06.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:47:06 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:47:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:07.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:47:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:08.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:08 np0005596060 nova_compute[247421]: 2026-01-26 18:47:08.603 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:47:08 np0005596060 podman[309792]: 2026-01-26 18:47:08.806348972 +0000 UTC m=+0.063463280 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 13:47:08 np0005596060 podman[309793]: 2026-01-26 18:47:08.838422714 +0000 UTC m=+0.095765457 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 13:47:08 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:09 np0005596060 nova_compute[247421]: 2026-01-26 18:47:09.160 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:47:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:09.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:10.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:10 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:47:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:11.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:47:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:47:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:12.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:12 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:47:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:13.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:47:13 np0005596060 nova_compute[247421]: 2026-01-26 18:47:13.661 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:47:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:47:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:47:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:47:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:47:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:47:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:47:14 np0005596060 nova_compute[247421]: 2026-01-26 18:47:14.162 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:47:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:14.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:47:14.774 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:47:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:47:14.775 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:47:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:47:14.775 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:47:14 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:15.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:16.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:47:16 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:17.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:18.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:18 np0005596060 nova_compute[247421]: 2026-01-26 18:47:18.664 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:18 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:19 np0005596060 nova_compute[247421]: 2026-01-26 18:47:19.163 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:47:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:19.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:47:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:47:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:20.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:47:20 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:21.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:47:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:22.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:23.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:47:23.658 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:47:23 np0005596060 nova_compute[247421]: 2026-01-26 18:47:23.659 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:23 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:47:23.659 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:47:23 np0005596060 nova_compute[247421]: 2026-01-26 18:47:23.666 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:24 np0005596060 nova_compute[247421]: 2026-01-26 18:47:24.165 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:24.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:24 np0005596060 nova_compute[247421]: 2026-01-26 18:47:24.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:47:24 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:47:24.662 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:47:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:25.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:26.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:47:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:27.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:28.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:28 np0005596060 nova_compute[247421]: 2026-01-26 18:47:28.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:47:28 np0005596060 nova_compute[247421]: 2026-01-26 18:47:28.720 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:29 np0005596060 nova_compute[247421]: 2026-01-26 18:47:29.167 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:47:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:29.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:47:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:30.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:31.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:31 np0005596060 nova_compute[247421]: 2026-01-26 18:47:31.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:47:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:47:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:32.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:33.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:33 np0005596060 nova_compute[247421]: 2026-01-26 18:47:33.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:47:33 np0005596060 nova_compute[247421]: 2026-01-26 18:47:33.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:47:33 np0005596060 nova_compute[247421]: 2026-01-26 18:47:33.722 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:34 np0005596060 nova_compute[247421]: 2026-01-26 18:47:34.169 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:34.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:34 np0005596060 nova_compute[247421]: 2026-01-26 18:47:34.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:47:34 np0005596060 nova_compute[247421]: 2026-01-26 18:47:34.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:47:34 np0005596060 nova_compute[247421]: 2026-01-26 18:47:34.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:47:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.193 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.194 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.194 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.213 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.214 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.214 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.214 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.215 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:47:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:35.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:35 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:47:35 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3710599631' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.695 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.856 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.860 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4612MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.860 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.861 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.985 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:47:35 np0005596060 nova_compute[247421]: 2026-01-26 18:47:35.986 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:47:36 np0005596060 nova_compute[247421]: 2026-01-26 18:47:36.022 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:47:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:36.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:47:36 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/729324361' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:47:36 np0005596060 nova_compute[247421]: 2026-01-26 18:47:36.559 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:47:36 np0005596060 nova_compute[247421]: 2026-01-26 18:47:36.564 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:47:36 np0005596060 nova_compute[247421]: 2026-01-26 18:47:36.586 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:47:36 np0005596060 nova_compute[247421]: 2026-01-26 18:47:36.621 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:47:36 np0005596060 nova_compute[247421]: 2026-01-26 18:47:36.622 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:47:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:47:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:47:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:37.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:47:38 np0005596060 nova_compute[247421]: 2026-01-26 18:47:38.078 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:47:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:38 np0005596060 nova_compute[247421]: 2026-01-26 18:47:38.725 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:39 np0005596060 nova_compute[247421]: 2026-01-26 18:47:39.171 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:39.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:39 np0005596060 nova_compute[247421]: 2026-01-26 18:47:39.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:47:39 np0005596060 podman[309951]: 2026-01-26 18:47:39.825836633 +0000 UTC m=+0.077112601 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 13:47:39 np0005596060 podman[309952]: 2026-01-26 18:47:39.867128446 +0000 UTC m=+0.112792583 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 13:47:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:40.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:47:41 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d01a66ee-e4ef-4c02-a161-bdf1101fe315 does not exist
Jan 26 13:47:41 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 5b56b261-00eb-47fb-a080-107f7d485ff1 does not exist
Jan 26 13:47:41 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d13f734d-58f5-4332-a7b8-5c56fcfa2213 does not exist
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:47:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:41.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:47:41 np0005596060 podman[310268]: 2026-01-26 18:47:41.892959129 +0000 UTC m=+0.037451779 container create ad60ebc5f455af287425edd8375cc141104680a4add27d7807082f08e61dfe9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pike, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:47:41 np0005596060 systemd[1]: Started libpod-conmon-ad60ebc5f455af287425edd8375cc141104680a4add27d7807082f08e61dfe9c.scope.
Jan 26 13:47:41 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:47:41 np0005596060 podman[310268]: 2026-01-26 18:47:41.877869701 +0000 UTC m=+0.022362371 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:47:41 np0005596060 podman[310268]: 2026-01-26 18:47:41.98731391 +0000 UTC m=+0.131806590 container init ad60ebc5f455af287425edd8375cc141104680a4add27d7807082f08e61dfe9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 26 13:47:41 np0005596060 podman[310268]: 2026-01-26 18:47:41.993972276 +0000 UTC m=+0.138464926 container start ad60ebc5f455af287425edd8375cc141104680a4add27d7807082f08e61dfe9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 13:47:41 np0005596060 podman[310268]: 2026-01-26 18:47:41.99689997 +0000 UTC m=+0.141392650 container attach ad60ebc5f455af287425edd8375cc141104680a4add27d7807082f08e61dfe9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:47:42 np0005596060 reverent_pike[310283]: 167 167
Jan 26 13:47:42 np0005596060 systemd[1]: libpod-ad60ebc5f455af287425edd8375cc141104680a4add27d7807082f08e61dfe9c.scope: Deactivated successfully.
Jan 26 13:47:42 np0005596060 conmon[310283]: conmon ad60ebc5f455af287425 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad60ebc5f455af287425edd8375cc141104680a4add27d7807082f08e61dfe9c.scope/container/memory.events
Jan 26 13:47:42 np0005596060 podman[310268]: 2026-01-26 18:47:42.002743726 +0000 UTC m=+0.147236376 container died ad60ebc5f455af287425edd8375cc141104680a4add27d7807082f08e61dfe9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:47:42 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d8a3e4f8788253ab1e19aaf672ac942f471d63067f6330c197075189977ec5f9-merged.mount: Deactivated successfully.
Jan 26 13:47:42 np0005596060 podman[310268]: 2026-01-26 18:47:42.040330656 +0000 UTC m=+0.184823306 container remove ad60ebc5f455af287425edd8375cc141104680a4add27d7807082f08e61dfe9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:47:42 np0005596060 systemd[1]: libpod-conmon-ad60ebc5f455af287425edd8375cc141104680a4add27d7807082f08e61dfe9c.scope: Deactivated successfully.
Jan 26 13:47:42 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:47:42 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:47:42 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:47:42 np0005596060 podman[310306]: 2026-01-26 18:47:42.215406536 +0000 UTC m=+0.047694874 container create e806d5f720cfeda64ed3d62ccc4c196d6a8e2417f2553861a2c1b48d265e5451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:47:42 np0005596060 systemd[1]: Started libpod-conmon-e806d5f720cfeda64ed3d62ccc4c196d6a8e2417f2553861a2c1b48d265e5451.scope.
Jan 26 13:47:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:42.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:42 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:47:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/313340a9d816def40420c98cc64e16783c345ec8a0411037128dd034f91e5ba8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/313340a9d816def40420c98cc64e16783c345ec8a0411037128dd034f91e5ba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/313340a9d816def40420c98cc64e16783c345ec8a0411037128dd034f91e5ba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/313340a9d816def40420c98cc64e16783c345ec8a0411037128dd034f91e5ba8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:42 np0005596060 podman[310306]: 2026-01-26 18:47:42.195009966 +0000 UTC m=+0.027298364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:47:42 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/313340a9d816def40420c98cc64e16783c345ec8a0411037128dd034f91e5ba8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:42 np0005596060 podman[310306]: 2026-01-26 18:47:42.304994238 +0000 UTC m=+0.137282606 container init e806d5f720cfeda64ed3d62ccc4c196d6a8e2417f2553861a2c1b48d265e5451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:47:42 np0005596060 podman[310306]: 2026-01-26 18:47:42.315667605 +0000 UTC m=+0.147955943 container start e806d5f720cfeda64ed3d62ccc4c196d6a8e2417f2553861a2c1b48d265e5451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:47:42 np0005596060 podman[310306]: 2026-01-26 18:47:42.319693296 +0000 UTC m=+0.151981654 container attach e806d5f720cfeda64ed3d62ccc4c196d6a8e2417f2553861a2c1b48d265e5451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 26 13:47:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:43 np0005596060 gallant_rubin[310323]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:47:43 np0005596060 gallant_rubin[310323]: --> relative data size: 1.0
Jan 26 13:47:43 np0005596060 gallant_rubin[310323]: --> All data devices are unavailable
Jan 26 13:47:43 np0005596060 systemd[1]: libpod-e806d5f720cfeda64ed3d62ccc4c196d6a8e2417f2553861a2c1b48d265e5451.scope: Deactivated successfully.
Jan 26 13:47:43 np0005596060 podman[310306]: 2026-01-26 18:47:43.107387606 +0000 UTC m=+0.939675944 container died e806d5f720cfeda64ed3d62ccc4c196d6a8e2417f2553861a2c1b48d265e5451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:47:43 np0005596060 systemd[1]: var-lib-containers-storage-overlay-313340a9d816def40420c98cc64e16783c345ec8a0411037128dd034f91e5ba8-merged.mount: Deactivated successfully.
Jan 26 13:47:43 np0005596060 podman[310306]: 2026-01-26 18:47:43.167450809 +0000 UTC m=+0.999739147 container remove e806d5f720cfeda64ed3d62ccc4c196d6a8e2417f2553861a2c1b48d265e5451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 13:47:43 np0005596060 systemd[1]: libpod-conmon-e806d5f720cfeda64ed3d62ccc4c196d6a8e2417f2553861a2c1b48d265e5451.scope: Deactivated successfully.
Jan 26 13:47:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:47:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:43.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:47:43 np0005596060 nova_compute[247421]: 2026-01-26 18:47:43.726 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:43 np0005596060 podman[310493]: 2026-01-26 18:47:43.864401779 +0000 UTC m=+0.041147990 container create f32a2c9ddae0ef071c50fc96b1b0a8c0f13893d846c72f9b3fe3c2ac8684eb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:47:43 np0005596060 systemd[1]: Started libpod-conmon-f32a2c9ddae0ef071c50fc96b1b0a8c0f13893d846c72f9b3fe3c2ac8684eb84.scope.
Jan 26 13:47:43 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:47:43 np0005596060 podman[310493]: 2026-01-26 18:47:43.935588221 +0000 UTC m=+0.112334452 container init f32a2c9ddae0ef071c50fc96b1b0a8c0f13893d846c72f9b3fe3c2ac8684eb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclaren, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:47:43 np0005596060 podman[310493]: 2026-01-26 18:47:43.844098961 +0000 UTC m=+0.020845212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:47:43 np0005596060 podman[310493]: 2026-01-26 18:47:43.942510554 +0000 UTC m=+0.119256765 container start f32a2c9ddae0ef071c50fc96b1b0a8c0f13893d846c72f9b3fe3c2ac8684eb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclaren, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 26 13:47:43 np0005596060 keen_mclaren[310509]: 167 167
Jan 26 13:47:43 np0005596060 podman[310493]: 2026-01-26 18:47:43.946242057 +0000 UTC m=+0.122988298 container attach f32a2c9ddae0ef071c50fc96b1b0a8c0f13893d846c72f9b3fe3c2ac8684eb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclaren, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 26 13:47:43 np0005596060 systemd[1]: libpod-f32a2c9ddae0ef071c50fc96b1b0a8c0f13893d846c72f9b3fe3c2ac8684eb84.scope: Deactivated successfully.
Jan 26 13:47:43 np0005596060 podman[310493]: 2026-01-26 18:47:43.947107259 +0000 UTC m=+0.123853490 container died f32a2c9ddae0ef071c50fc96b1b0a8c0f13893d846c72f9b3fe3c2ac8684eb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclaren, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:47:43 np0005596060 systemd[1]: var-lib-containers-storage-overlay-bd528c7c86053b77b3e78bdfdd2313e9c30e789f479689693adfaaa35a4f7cec-merged.mount: Deactivated successfully.
Jan 26 13:47:43 np0005596060 podman[310493]: 2026-01-26 18:47:43.98873468 +0000 UTC m=+0.165480891 container remove f32a2c9ddae0ef071c50fc96b1b0a8c0f13893d846c72f9b3fe3c2ac8684eb84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:47:43 np0005596060 systemd[1]: libpod-conmon-f32a2c9ddae0ef071c50fc96b1b0a8c0f13893d846c72f9b3fe3c2ac8684eb84.scope: Deactivated successfully.
Jan 26 13:47:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:47:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:47:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:47:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:47:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:47:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:47:44 np0005596060 podman[310532]: 2026-01-26 18:47:44.149961205 +0000 UTC m=+0.054567597 container create 24ee3846c60a477dc691ef18011677e89c7481521f078d0cf2745dcc3ecd9c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:47:44 np0005596060 nova_compute[247421]: 2026-01-26 18:47:44.173 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:47:44
Jan 26 13:47:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:47:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:47:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'backups']
Jan 26 13:47:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:47:44 np0005596060 systemd[1]: Started libpod-conmon-24ee3846c60a477dc691ef18011677e89c7481521f078d0cf2745dcc3ecd9c48.scope.
Jan 26 13:47:44 np0005596060 podman[310532]: 2026-01-26 18:47:44.1217936 +0000 UTC m=+0.026400062 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:47:44 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:47:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dca2cff164b04e987b44431911fadee903d0c202dc8d65683416c73b4abf0ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dca2cff164b04e987b44431911fadee903d0c202dc8d65683416c73b4abf0ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dca2cff164b04e987b44431911fadee903d0c202dc8d65683416c73b4abf0ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:44 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5dca2cff164b04e987b44431911fadee903d0c202dc8d65683416c73b4abf0ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:44 np0005596060 podman[310532]: 2026-01-26 18:47:44.238242884 +0000 UTC m=+0.142849276 container init 24ee3846c60a477dc691ef18011677e89c7481521f078d0cf2745dcc3ecd9c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 13:47:44 np0005596060 podman[310532]: 2026-01-26 18:47:44.244687535 +0000 UTC m=+0.149293917 container start 24ee3846c60a477dc691ef18011677e89c7481521f078d0cf2745dcc3ecd9c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 26 13:47:44 np0005596060 podman[310532]: 2026-01-26 18:47:44.248366427 +0000 UTC m=+0.152972789 container attach 24ee3846c60a477dc691ef18011677e89c7481521f078d0cf2745dcc3ecd9c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 13:47:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:44.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:44 np0005596060 nova_compute[247421]: 2026-01-26 18:47:44.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:47:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:47:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:47:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:47:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:47:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:47:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:47:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:47:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:47:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:47:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]: {
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:    "1": [
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:        {
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            "devices": [
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "/dev/loop3"
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            ],
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            "lv_name": "ceph_lv0",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            "lv_size": "7511998464",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            "name": "ceph_lv0",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            "tags": {
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "ceph.cluster_name": "ceph",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "ceph.crush_device_class": "",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "ceph.encrypted": "0",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "ceph.osd_id": "1",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "ceph.type": "block",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:                "ceph.vdo": "0"
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            },
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            "type": "block",
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:            "vg_name": "ceph_vg0"
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:        }
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]:    ]
Jan 26 13:47:45 np0005596060 interesting_jackson[310547]: }
Jan 26 13:47:45 np0005596060 systemd[1]: libpod-24ee3846c60a477dc691ef18011677e89c7481521f078d0cf2745dcc3ecd9c48.scope: Deactivated successfully.
Jan 26 13:47:45 np0005596060 podman[310532]: 2026-01-26 18:47:45.045519655 +0000 UTC m=+0.950126047 container died 24ee3846c60a477dc691ef18011677e89c7481521f078d0cf2745dcc3ecd9c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 13:47:45 np0005596060 systemd[1]: var-lib-containers-storage-overlay-5dca2cff164b04e987b44431911fadee903d0c202dc8d65683416c73b4abf0ef-merged.mount: Deactivated successfully.
Jan 26 13:47:45 np0005596060 podman[310532]: 2026-01-26 18:47:45.106156612 +0000 UTC m=+1.010762974 container remove 24ee3846c60a477dc691ef18011677e89c7481521f078d0cf2745dcc3ecd9c48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_jackson, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 26 13:47:45 np0005596060 systemd[1]: libpod-conmon-24ee3846c60a477dc691ef18011677e89c7481521f078d0cf2745dcc3ecd9c48.scope: Deactivated successfully.
Jan 26 13:47:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:47:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:45.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:47:45 np0005596060 podman[310710]: 2026-01-26 18:47:45.669310823 +0000 UTC m=+0.037456298 container create 62346d805420c59a64af560c380746eb92d9a366dd16f6586729b856f0cd91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 13:47:45 np0005596060 systemd[1]: Started libpod-conmon-62346d805420c59a64af560c380746eb92d9a366dd16f6586729b856f0cd91cb.scope.
Jan 26 13:47:45 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:47:45 np0005596060 podman[310710]: 2026-01-26 18:47:45.746595267 +0000 UTC m=+0.114740752 container init 62346d805420c59a64af560c380746eb92d9a366dd16f6586729b856f0cd91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_meninsky, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:47:45 np0005596060 podman[310710]: 2026-01-26 18:47:45.653685753 +0000 UTC m=+0.021831258 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:47:45 np0005596060 podman[310710]: 2026-01-26 18:47:45.760348731 +0000 UTC m=+0.128494226 container start 62346d805420c59a64af560c380746eb92d9a366dd16f6586729b856f0cd91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_meninsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:47:45 np0005596060 podman[310710]: 2026-01-26 18:47:45.76392289 +0000 UTC m=+0.132068415 container attach 62346d805420c59a64af560c380746eb92d9a366dd16f6586729b856f0cd91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_meninsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:47:45 np0005596060 clever_meninsky[310726]: 167 167
Jan 26 13:47:45 np0005596060 systemd[1]: libpod-62346d805420c59a64af560c380746eb92d9a366dd16f6586729b856f0cd91cb.scope: Deactivated successfully.
Jan 26 13:47:45 np0005596060 conmon[310726]: conmon 62346d805420c59a64af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62346d805420c59a64af560c380746eb92d9a366dd16f6586729b856f0cd91cb.scope/container/memory.events
Jan 26 13:47:45 np0005596060 podman[310731]: 2026-01-26 18:47:45.819348717 +0000 UTC m=+0.027861498 container died 62346d805420c59a64af560c380746eb92d9a366dd16f6586729b856f0cd91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:47:45 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ff3e1c86a4a27ad62fc0feb1fafec7d92cb895ec7541effc05ba7e7674662ea3-merged.mount: Deactivated successfully.
Jan 26 13:47:45 np0005596060 podman[310731]: 2026-01-26 18:47:45.857559443 +0000 UTC m=+0.066072224 container remove 62346d805420c59a64af560c380746eb92d9a366dd16f6586729b856f0cd91cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_meninsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 13:47:45 np0005596060 systemd[1]: libpod-conmon-62346d805420c59a64af560c380746eb92d9a366dd16f6586729b856f0cd91cb.scope: Deactivated successfully.
Jan 26 13:47:46 np0005596060 podman[310800]: 2026-01-26 18:47:46.023329411 +0000 UTC m=+0.039179121 container create 761c27f6b214757f3b33afc7d42882cb5a183a243309a4d847aa1aa8a027cc02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 13:47:46 np0005596060 systemd[1]: Started libpod-conmon-761c27f6b214757f3b33afc7d42882cb5a183a243309a4d847aa1aa8a027cc02.scope.
Jan 26 13:47:46 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:47:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9ad4b8e96ad918164a3963c3535738adca8d9281511274a73c8882ff0bc336/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:46 np0005596060 podman[310800]: 2026-01-26 18:47:46.005384502 +0000 UTC m=+0.021234252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:47:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9ad4b8e96ad918164a3963c3535738adca8d9281511274a73c8882ff0bc336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9ad4b8e96ad918164a3963c3535738adca8d9281511274a73c8882ff0bc336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:46 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9ad4b8e96ad918164a3963c3535738adca8d9281511274a73c8882ff0bc336/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:47:46 np0005596060 podman[310800]: 2026-01-26 18:47:46.118503923 +0000 UTC m=+0.134353643 container init 761c27f6b214757f3b33afc7d42882cb5a183a243309a4d847aa1aa8a027cc02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_visvesvaraya, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:47:46 np0005596060 podman[310800]: 2026-01-26 18:47:46.126298278 +0000 UTC m=+0.142147988 container start 761c27f6b214757f3b33afc7d42882cb5a183a243309a4d847aa1aa8a027cc02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:47:46 np0005596060 podman[310800]: 2026-01-26 18:47:46.129148239 +0000 UTC m=+0.144997949 container attach 761c27f6b214757f3b33afc7d42882cb5a183a243309a4d847aa1aa8a027cc02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:47:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:46.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:47:46 np0005596060 angry_visvesvaraya[310816]: {
Jan 26 13:47:46 np0005596060 angry_visvesvaraya[310816]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:47:46 np0005596060 angry_visvesvaraya[310816]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:47:46 np0005596060 angry_visvesvaraya[310816]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:47:46 np0005596060 angry_visvesvaraya[310816]:        "osd_id": 1,
Jan 26 13:47:46 np0005596060 angry_visvesvaraya[310816]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:47:46 np0005596060 angry_visvesvaraya[310816]:        "type": "bluestore"
Jan 26 13:47:46 np0005596060 angry_visvesvaraya[310816]:    }
Jan 26 13:47:46 np0005596060 angry_visvesvaraya[310816]: }
Jan 26 13:47:46 np0005596060 systemd[1]: libpod-761c27f6b214757f3b33afc7d42882cb5a183a243309a4d847aa1aa8a027cc02.scope: Deactivated successfully.
Jan 26 13:47:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:47 np0005596060 podman[310837]: 2026-01-26 18:47:47.028939295 +0000 UTC m=+0.023836608 container died 761c27f6b214757f3b33afc7d42882cb5a183a243309a4d847aa1aa8a027cc02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_visvesvaraya, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 13:47:47 np0005596060 systemd[1]: var-lib-containers-storage-overlay-1a9ad4b8e96ad918164a3963c3535738adca8d9281511274a73c8882ff0bc336-merged.mount: Deactivated successfully.
Jan 26 13:47:47 np0005596060 podman[310837]: 2026-01-26 18:47:47.075242863 +0000 UTC m=+0.070140156 container remove 761c27f6b214757f3b33afc7d42882cb5a183a243309a4d847aa1aa8a027cc02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_visvesvaraya, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:47:47 np0005596060 systemd[1]: libpod-conmon-761c27f6b214757f3b33afc7d42882cb5a183a243309a4d847aa1aa8a027cc02.scope: Deactivated successfully.
Jan 26 13:47:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:47:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:47:47 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:47:47 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:47:47 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 93b39dc2-ddde-400c-961b-a5c0ce29d763 does not exist
Jan 26 13:47:47 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 3cc21a70-4c30-49ef-be83-c34261d3ded7 does not exist
Jan 26 13:47:47 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d4b4706f-b732-4ba5-a723-d36b78fccf70 does not exist
Jan 26 13:47:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:47.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:47:48 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:47:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:48.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:48 np0005596060 nova_compute[247421]: 2026-01-26 18:47:48.787 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:47:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:49 np0005596060 nova_compute[247421]: 2026-01-26 18:47:49.176 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:47:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:49.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:50.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:51.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:47:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:52.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:53.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:53 np0005596060 nova_compute[247421]: 2026-01-26 18:47:53.789 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:54 np0005596060 nova_compute[247421]: 2026-01-26 18:47:54.177 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:54.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:47:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:47:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:55.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:47:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:56.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:47:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail; 255 B/s rd, 170 B/s wr, 0 op/s
Jan 26 13:47:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:57.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:47:58.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:58 np0005596060 nova_compute[247421]: 2026-01-26 18:47:58.671 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:47:58 np0005596060 nova_compute[247421]: 2026-01-26 18:47:58.671 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 26 13:47:58 np0005596060 nova_compute[247421]: 2026-01-26 18:47:58.689 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 26 13:47:58 np0005596060 nova_compute[247421]: 2026-01-26 18:47:58.831 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:58 np0005596060 nova_compute[247421]: 2026-01-26 18:47:58.981 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquiring lock "4448620c-6ae7-4a36-98b9-cf616b071da7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:47:58 np0005596060 nova_compute[247421]: 2026-01-26 18:47:58.982 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:47:58 np0005596060 nova_compute[247421]: 2026-01-26 18:47:58.996 247428 DEBUG nova.compute.manager [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 26 13:47:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.057 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.058 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.068 247428 DEBUG nova.virt.hardware [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.069 247428 INFO nova.compute.claims [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.168 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.196 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:47:59 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:47:59 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/767218183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.585 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.594 247428 DEBUG nova.compute.provider_tree [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.612 247428 DEBUG nova.scheduler.client.report [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:47:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:47:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:47:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:47:59.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.647 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.648 247428 DEBUG nova.compute.manager [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.708 247428 DEBUG nova.compute.manager [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.709 247428 DEBUG nova.network.neutron [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.737 247428 INFO nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.759 247428 DEBUG nova.compute.manager [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.869 247428 DEBUG nova.compute.manager [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.870 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.871 247428 INFO nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Creating image(s)#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.893 247428 DEBUG nova.storage.rbd_utils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] rbd image 4448620c-6ae7-4a36-98b9-cf616b071da7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.918 247428 DEBUG nova.storage.rbd_utils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] rbd image 4448620c-6ae7-4a36-98b9-cf616b071da7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.942 247428 DEBUG nova.storage.rbd_utils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] rbd image 4448620c-6ae7-4a36-98b9-cf616b071da7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:47:59 np0005596060 nova_compute[247421]: 2026-01-26 18:47:59.945 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.025 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.026 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquiring lock "0e27310cde9db7031eb6052434134c1283ddf216" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.027 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.027 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "0e27310cde9db7031eb6052434134c1283ddf216" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.049 247428 DEBUG nova.storage.rbd_utils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] rbd image 4448620c-6ae7-4a36-98b9-cf616b071da7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.052 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 4448620c-6ae7-4a36-98b9-cf616b071da7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:48:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:00.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.319 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/0e27310cde9db7031eb6052434134c1283ddf216 4448620c-6ae7-4a36-98b9-cf616b071da7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.267s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.411 247428 DEBUG nova.storage.rbd_utils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] resizing rbd image 4448620c-6ae7-4a36-98b9-cf616b071da7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.520 247428 DEBUG nova.policy [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '607a744d16234868b129a11863dd5515', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd0d840e2f88d463da0429813ca3c3914', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.528 247428 DEBUG nova.objects.instance [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lazy-loading 'migration_context' on Instance uuid 4448620c-6ae7-4a36-98b9-cf616b071da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.542 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.542 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Ensure instance console log exists: /var/lib/nova/instances/4448620c-6ae7-4a36-98b9-cf616b071da7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.543 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.543 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:48:00 np0005596060 nova_compute[247421]: 2026-01-26 18:48:00.543 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:48:00 np0005596060 ovn_controller[148842]: 2026-01-26T18:48:00Z|00212|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Jan 26 13:48:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 21 GiB / 21 GiB avail; 426 B/s rd, 341 B/s wr, 0 op/s
Jan 26 13:48:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:01.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:48:01 np0005596060 nova_compute[247421]: 2026-01-26 18:48:01.969 247428 DEBUG nova.network.neutron [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Successfully created port: c81857f7-d034-41c1-8f0f-2d11c566b9fa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 26 13:48:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:02.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:02 np0005596060 nova_compute[247421]: 2026-01-26 18:48:02.962 247428 DEBUG nova.network.neutron [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Successfully updated port: c81857f7-d034-41c1-8f0f-2d11c566b9fa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 26 13:48:02 np0005596060 nova_compute[247421]: 2026-01-26 18:48:02.990 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquiring lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:48:02 np0005596060 nova_compute[247421]: 2026-01-26 18:48:02.991 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquired lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:48:02 np0005596060 nova_compute[247421]: 2026-01-26 18:48:02.991 247428 DEBUG nova.network.neutron [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 26 13:48:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 13:48:03 np0005596060 nova_compute[247421]: 2026-01-26 18:48:03.088 247428 DEBUG nova.compute.manager [req-732fd745-d408-4caf-b6ac-06d9d1778c59 req-6f6bbd88-295a-48b3-bfeb-eebfc75a9ae2 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Received event network-changed-c81857f7-d034-41c1-8f0f-2d11c566b9fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:48:03 np0005596060 nova_compute[247421]: 2026-01-26 18:48:03.089 247428 DEBUG nova.compute.manager [req-732fd745-d408-4caf-b6ac-06d9d1778c59 req-6f6bbd88-295a-48b3-bfeb-eebfc75a9ae2 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Refreshing instance network info cache due to event network-changed-c81857f7-d034-41c1-8f0f-2d11c566b9fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:48:03 np0005596060 nova_compute[247421]: 2026-01-26 18:48:03.089 247428 DEBUG oslo_concurrency.lockutils [req-732fd745-d408-4caf-b6ac-06d9d1778c59 req-6f6bbd88-295a-48b3-bfeb-eebfc75a9ae2 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:48:03 np0005596060 nova_compute[247421]: 2026-01-26 18:48:03.138 247428 DEBUG nova.network.neutron [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 26 13:48:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:03.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:03 np0005596060 nova_compute[247421]: 2026-01-26 18:48:03.834 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:48:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.199 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:04.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.676 247428 DEBUG nova.network.neutron [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Updating instance_info_cache with network_info: [{"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.702 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Releasing lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.702 247428 DEBUG nova.compute.manager [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Instance network_info: |[{"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.702 247428 DEBUG oslo_concurrency.lockutils [req-732fd745-d408-4caf-b6ac-06d9d1778c59 req-6f6bbd88-295a-48b3-bfeb-eebfc75a9ae2 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.703 247428 DEBUG nova.network.neutron [req-732fd745-d408-4caf-b6ac-06d9d1778c59 req-6f6bbd88-295a-48b3-bfeb-eebfc75a9ae2 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Refreshing network info cache for port c81857f7-d034-41c1-8f0f-2d11c566b9fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.706 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Start _get_guest_xml network_info=[{"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'guest_format': None, 'device_type': 'disk', 'encryption_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'encryption_options': None, 'boot_index': 0, 'size': 0, 'image_id': '57de5960-c1c5-4cfa-af34-8f58cf25f585'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.711 247428 WARNING nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.716 247428 DEBUG nova.virt.libvirt.host [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.717 247428 DEBUG nova.virt.libvirt.host [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.724 247428 DEBUG nova.virt.libvirt.host [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.725 247428 DEBUG nova.virt.libvirt.host [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.726 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.726 247428 DEBUG nova.virt.hardware [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T18:05:20Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='c19d349c-ad8f-4453-bd9e-1248725b13ed',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T18:05:23Z,direct_url=<?>,disk_format='qcow2',id=57de5960-c1c5-4cfa-af34-8f58cf25f585,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='ce9c2caf475c4ad29ab1e03bc8886f7a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T18:05:28Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.727 247428 DEBUG nova.virt.hardware [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.727 247428 DEBUG nova.virt.hardware [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.727 247428 DEBUG nova.virt.hardware [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.727 247428 DEBUG nova.virt.hardware [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.727 247428 DEBUG nova.virt.hardware [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.728 247428 DEBUG nova.virt.hardware [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.728 247428 DEBUG nova.virt.hardware [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.728 247428 DEBUG nova.virt.hardware [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.728 247428 DEBUG nova.virt.hardware [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.729 247428 DEBUG nova.virt.hardware [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 26 13:48:04 np0005596060 nova_compute[247421]: 2026-01-26 18:48:04.731 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:48:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 13:48:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:48:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/522849684' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.160 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.189 247428 DEBUG nova.storage.rbd_utils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] rbd image 4448620c-6ae7-4a36-98b9-cf616b071da7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.194 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:48:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 26 13:48:05 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2929539815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 26 13:48:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:05.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.646 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.649 247428 DEBUG nova.virt.libvirt.vif [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:47:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-853570668',display_name='tempest-TestStampPattern-server-853570668',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-853570668',id=31,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOPlHELE9ANDmnjPtXRweIc6NLWB4tjRssupdTbbRXJphUqr4KnPaqzvrgCYinLJGLkYacL40FbC5LaSigcqHxaArN4zqgbgumBJ+u494ihSKQ6ae+o9uIVi5vvtty16tQ==',key_name='tempest-TestStampPattern-298146011',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0d840e2f88d463da0429813ca3c3914',ramdisk_id='',reservation_id='r-s501ti00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-200466580',owner_user_name='tempest-TestStampPattern-200466580-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:47:59Z,user_data=None,user_id='607a744d16234868b129a11863dd5515',uuid=4448620c-6ae7-4a36-98b9-cf616b071da7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.649 247428 DEBUG nova.network.os_vif_util [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Converting VIF {"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.650 247428 DEBUG nova.network.os_vif_util [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d6:09,bridge_name='br-int',has_traffic_filtering=True,id=c81857f7-d034-41c1-8f0f-2d11c566b9fa,network=Network(7028e9ff-4580-4927-a34a-bf2749f519c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc81857f7-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.651 247428 DEBUG nova.objects.instance [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4448620c-6ae7-4a36-98b9-cf616b071da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.667 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] End _get_guest_xml xml=<domain type="kvm">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  <uuid>4448620c-6ae7-4a36-98b9-cf616b071da7</uuid>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  <name>instance-0000001f</name>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  <memory>131072</memory>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  <vcpu>1</vcpu>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  <metadata>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <nova:name>tempest-TestStampPattern-server-853570668</nova:name>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <nova:creationTime>2026-01-26 18:48:04</nova:creationTime>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <nova:flavor name="m1.nano">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <nova:memory>128</nova:memory>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <nova:disk>1</nova:disk>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <nova:swap>0</nova:swap>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <nova:ephemeral>0</nova:ephemeral>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <nova:vcpus>1</nova:vcpus>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      </nova:flavor>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <nova:owner>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <nova:user uuid="607a744d16234868b129a11863dd5515">tempest-TestStampPattern-200466580-project-member</nova:user>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <nova:project uuid="d0d840e2f88d463da0429813ca3c3914">tempest-TestStampPattern-200466580</nova:project>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      </nova:owner>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <nova:root type="image" uuid="57de5960-c1c5-4cfa-af34-8f58cf25f585"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <nova:ports>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <nova:port uuid="c81857f7-d034-41c1-8f0f-2d11c566b9fa">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        </nova:port>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      </nova:ports>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    </nova:instance>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  </metadata>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  <sysinfo type="smbios">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <system>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <entry name="manufacturer">RDO</entry>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <entry name="product">OpenStack Compute</entry>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <entry name="serial">4448620c-6ae7-4a36-98b9-cf616b071da7</entry>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <entry name="uuid">4448620c-6ae7-4a36-98b9-cf616b071da7</entry>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <entry name="family">Virtual Machine</entry>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    </system>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  </sysinfo>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  <os>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <boot dev="hd"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <smbios mode="sysinfo"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  </os>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  <features>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <acpi/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <apic/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <vmcoreinfo/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  </features>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  <clock offset="utc">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <timer name="pit" tickpolicy="delay"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <timer name="hpet" present="no"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  </clock>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  <cpu mode="custom" match="exact">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <model>Nehalem</model>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <topology sockets="1" cores="1" threads="1"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  </cpu>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  <devices>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <disk type="network" device="disk">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/4448620c-6ae7-4a36-98b9-cf616b071da7_disk">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <target dev="vda" bus="virtio"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <disk type="network" device="cdrom">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <driver type="raw" cache="none"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <source protocol="rbd" name="vms/4448620c-6ae7-4a36-98b9-cf616b071da7_disk.config">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <host name="192.168.122.100" port="6789"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <host name="192.168.122.102" port="6789"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <host name="192.168.122.101" port="6789"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      </source>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <auth username="openstack">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:        <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      </auth>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <target dev="sda" bus="sata"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    </disk>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <interface type="ethernet">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <mac address="fa:16:3e:3f:d6:09"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <driver name="vhost" rx_queue_size="512"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <mtu size="1442"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <target dev="tapc81857f7-d0"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    </interface>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <serial type="pty">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <log file="/var/lib/nova/instances/4448620c-6ae7-4a36-98b9-cf616b071da7/console.log" append="off"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    </serial>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <video>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <model type="virtio"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    </video>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <input type="tablet" bus="usb"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <rng model="virtio">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <backend model="random">/dev/urandom</backend>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    </rng>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="pci" model="pcie-root-port"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <controller type="usb" index="0"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    <memballoon model="virtio">
Jan 26 13:48:05 np0005596060 nova_compute[247421]:      <stats period="10"/>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:    </memballoon>
Jan 26 13:48:05 np0005596060 nova_compute[247421]:  </devices>
Jan 26 13:48:05 np0005596060 nova_compute[247421]: </domain>
Jan 26 13:48:05 np0005596060 nova_compute[247421]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.669 247428 DEBUG nova.compute.manager [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Preparing to wait for external event network-vif-plugged-c81857f7-d034-41c1-8f0f-2d11c566b9fa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.669 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquiring lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.670 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.670 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.672 247428 DEBUG nova.virt.libvirt.vif [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T18:47:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-853570668',display_name='tempest-TestStampPattern-server-853570668',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-853570668',id=31,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOPlHELE9ANDmnjPtXRweIc6NLWB4tjRssupdTbbRXJphUqr4KnPaqzvrgCYinLJGLkYacL40FbC5LaSigcqHxaArN4zqgbgumBJ+u494ihSKQ6ae+o9uIVi5vvtty16tQ==',key_name='tempest-TestStampPattern-298146011',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d0d840e2f88d463da0429813ca3c3914',ramdisk_id='',reservation_id='r-s501ti00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-200466580',owner_user_name='tempest-TestStampPattern-200466580-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T18:47:59Z,user_data=None,user_id='607a744d16234868b129a11863dd5515',uuid=4448620c-6ae7-4a36-98b9-cf616b071da7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.673 247428 DEBUG nova.network.os_vif_util [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Converting VIF {"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.674 247428 DEBUG nova.network.os_vif_util [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d6:09,bridge_name='br-int',has_traffic_filtering=True,id=c81857f7-d034-41c1-8f0f-2d11c566b9fa,network=Network(7028e9ff-4580-4927-a34a-bf2749f519c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc81857f7-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.675 247428 DEBUG os_vif [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d6:09,bridge_name='br-int',has_traffic_filtering=True,id=c81857f7-d034-41c1-8f0f-2d11c566b9fa,network=Network(7028e9ff-4580-4927-a34a-bf2749f519c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc81857f7-d0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.676 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.677 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.678 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.685 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.686 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc81857f7-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.687 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc81857f7-d0, col_values=(('external_ids', {'iface-id': 'c81857f7-d034-41c1-8f0f-2d11c566b9fa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:d6:09', 'vm-uuid': '4448620c-6ae7-4a36-98b9-cf616b071da7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.690 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:05 np0005596060 NetworkManager[48900]: <info>  [1769453285.6921] manager: (tapc81857f7-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.693 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.698 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.699 247428 INFO os_vif [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:d6:09,bridge_name='br-int',has_traffic_filtering=True,id=c81857f7-d034-41c1-8f0f-2d11c566b9fa,network=Network(7028e9ff-4580-4927-a34a-bf2749f519c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc81857f7-d0')#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.780 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.780 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.780 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] No VIF found with MAC fa:16:3e:3f:d6:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.781 247428 INFO nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Using config drive#033[00m
Jan 26 13:48:05 np0005596060 nova_compute[247421]: 2026-01-26 18:48:05.806 247428 DEBUG nova.storage.rbd_utils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] rbd image 4448620c-6ae7-4a36-98b9-cf616b071da7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.151 247428 INFO nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Creating config drive at /var/lib/nova/instances/4448620c-6ae7-4a36-98b9-cf616b071da7/disk.config#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.158 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4448620c-6ae7-4a36-98b9-cf616b071da7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0_hi_o7z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:48:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:06.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.301 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4448620c-6ae7-4a36-98b9-cf616b071da7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0_hi_o7z" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.327 247428 DEBUG nova.storage.rbd_utils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] rbd image 4448620c-6ae7-4a36-98b9-cf616b071da7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.331 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4448620c-6ae7-4a36-98b9-cf616b071da7/disk.config 4448620c-6ae7-4a36-98b9-cf616b071da7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.497 247428 DEBUG nova.network.neutron [req-732fd745-d408-4caf-b6ac-06d9d1778c59 req-6f6bbd88-295a-48b3-bfeb-eebfc75a9ae2 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Updated VIF entry in instance network info cache for port c81857f7-d034-41c1-8f0f-2d11c566b9fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.498 247428 DEBUG nova.network.neutron [req-732fd745-d408-4caf-b6ac-06d9d1778c59 req-6f6bbd88-295a-48b3-bfeb-eebfc75a9ae2 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Updating instance_info_cache with network_info: [{"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.512 247428 DEBUG oslo_concurrency.lockutils [req-732fd745-d408-4caf-b6ac-06d9d1778c59 req-6f6bbd88-295a-48b3-bfeb-eebfc75a9ae2 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.544 247428 DEBUG oslo_concurrency.processutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4448620c-6ae7-4a36-98b9-cf616b071da7/disk.config 4448620c-6ae7-4a36-98b9-cf616b071da7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.213s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.545 247428 INFO nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Deleting local config drive /var/lib/nova/instances/4448620c-6ae7-4a36-98b9-cf616b071da7/disk.config because it was imported into RBD.#033[00m
Jan 26 13:48:06 np0005596060 NetworkManager[48900]: <info>  [1769453286.6139] manager: (tapc81857f7-d0): new Tun device (/org/freedesktop/NetworkManager/Devices/110)
Jan 26 13:48:06 np0005596060 kernel: tapc81857f7-d0: entered promiscuous mode
Jan 26 13:48:06 np0005596060 ovn_controller[148842]: 2026-01-26T18:48:06Z|00213|binding|INFO|Claiming lport c81857f7-d034-41c1-8f0f-2d11c566b9fa for this chassis.
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.616 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:06 np0005596060 ovn_controller[148842]: 2026-01-26T18:48:06Z|00214|binding|INFO|c81857f7-d034-41c1-8f0f-2d11c566b9fa: Claiming fa:16:3e:3f:d6:09 10.100.0.7
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.620 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.622 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.630 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:d6:09 10.100.0.7'], port_security=['fa:16:3e:3f:d6:09 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4448620c-6ae7-4a36-98b9-cf616b071da7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7028e9ff-4580-4927-a34a-bf2749f519c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0d840e2f88d463da0429813ca3c3914', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e4f45d30-450b-4f84-9f37-19af8e10da2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16f45680-7acc-4bd4-acd3-31941d09daad, chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=c81857f7-d034-41c1-8f0f-2d11c566b9fa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.631 159331 INFO neutron.agent.ovn.metadata.agent [-] Port c81857f7-d034-41c1-8f0f-2d11c566b9fa in datapath 7028e9ff-4580-4927-a34a-bf2749f519c0 bound to our chassis#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.633 159331 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7028e9ff-4580-4927-a34a-bf2749f519c0#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.651 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[f8603b2d-f317-4a79-9f2a-15d86f4ade3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.652 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7028e9ff-41 in ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 26 13:48:06 np0005596060 systemd-machined[213879]: New machine qemu-18-instance-0000001f.
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.654 253549 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7028e9ff-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.655 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[53dae4ea-c40f-4039-a462-b93613d36de1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.656 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae70a90-9a73-4847-90ea-1e356beeba5f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.675 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[0519154e-7ea7-47ce-8433-8983847d3eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.695 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:06 np0005596060 systemd[1]: Started Virtual Machine qemu-18-instance-0000001f.
Jan 26 13:48:06 np0005596060 ovn_controller[148842]: 2026-01-26T18:48:06Z|00215|binding|INFO|Setting lport c81857f7-d034-41c1-8f0f-2d11c566b9fa ovn-installed in OVS
Jan 26 13:48:06 np0005596060 ovn_controller[148842]: 2026-01-26T18:48:06Z|00216|binding|INFO|Setting lport c81857f7-d034-41c1-8f0f-2d11c566b9fa up in Southbound
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.700 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.707 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8fe6b0-502d-403b-b9eb-0baa000b2825]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 systemd-udevd[311287]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:48:06 np0005596060 NetworkManager[48900]: <info>  [1769453286.7307] device (tapc81857f7-d0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 13:48:06 np0005596060 NetworkManager[48900]: <info>  [1769453286.7316] device (tapc81857f7-d0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.744 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf9b03a-e55e-48a1-ab46-9091ce28fe04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.749 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[def24dc3-5dbd-405d-aec9-bc995d53c35e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 NetworkManager[48900]: <info>  [1769453286.7501] manager: (tap7028e9ff-40): new Veth device (/org/freedesktop/NetworkManager/Devices/111)
Jan 26 13:48:06 np0005596060 systemd-udevd[311291]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.782 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[357eb28d-ddc6-4e04-b7aa-3d39108b2646]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.785 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd36183-668f-48b3-bbb7-1897ce772b27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 NetworkManager[48900]: <info>  [1769453286.8102] device (tap7028e9ff-40): carrier: link connected
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.824 253606 DEBUG oslo.privsep.daemon [-] privsep: reply[bf627b86-d3f2-43d5-b4d0-90e5c26ff284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.843 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1c3750-df0b-447a-b297-a91359f7b4d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7028e9ff-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:b5:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705139, 'reachable_time': 40600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311317, 'error': None, 'target': 'ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.863 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[513391c1-d3b1-4d9c-babf-b8c0c6c4fb01]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed0:b523'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 705139, 'tstamp': 705139}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311318, 'error': None, 'target': 'ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.880 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[65fc25e4-240e-455b-911f-cc50ccc576d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7028e9ff-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d0:b5:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 61], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705139, 'reachable_time': 40600, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311319, 'error': None, 'target': 'ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.915 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[c720d88f-3e1e-4b6e-af76-1c5016bf7ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.923 247428 DEBUG nova.compute.manager [req-c6daf77e-250f-43a3-86de-c8d50a90f091 req-e1f24b78-920c-4b02-b9ef-a2f1397afd37 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Received event network-vif-plugged-c81857f7-d034-41c1-8f0f-2d11c566b9fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.923 247428 DEBUG oslo_concurrency.lockutils [req-c6daf77e-250f-43a3-86de-c8d50a90f091 req-e1f24b78-920c-4b02-b9ef-a2f1397afd37 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.924 247428 DEBUG oslo_concurrency.lockutils [req-c6daf77e-250f-43a3-86de-c8d50a90f091 req-e1f24b78-920c-4b02-b9ef-a2f1397afd37 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.924 247428 DEBUG oslo_concurrency.lockutils [req-c6daf77e-250f-43a3-86de-c8d50a90f091 req-e1f24b78-920c-4b02-b9ef-a2f1397afd37 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.924 247428 DEBUG nova.compute.manager [req-c6daf77e-250f-43a3-86de-c8d50a90f091 req-e1f24b78-920c-4b02-b9ef-a2f1397afd37 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Processing event network-vif-plugged-c81857f7-d034-41c1-8f0f-2d11c566b9fa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.986 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[1cbe5b5d-03ce-472d-a5b3-65b57cb3f016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.988 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7028e9ff-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.988 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.990 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7028e9ff-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.993 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:06 np0005596060 NetworkManager[48900]: <info>  [1769453286.9940] manager: (tap7028e9ff-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Jan 26 13:48:06 np0005596060 kernel: tap7028e9ff-40: entered promiscuous mode
Jan 26 13:48:06 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:06.997 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7028e9ff-40, col_values=(('external_ids', {'iface-id': '346fccd6-0c61-47af-adf2-0479f55b1687'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:48:06 np0005596060 ovn_controller[148842]: 2026-01-26T18:48:06Z|00217|binding|INFO|Releasing lport 346fccd6-0c61-47af-adf2-0479f55b1687 from this chassis (sb_readonly=0)
Jan 26 13:48:06 np0005596060 nova_compute[247421]: 2026-01-26 18:48:06.999 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:07.000 159331 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7028e9ff-4580-4927-a34a-bf2749f519c0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7028e9ff-4580-4927-a34a-bf2749f519c0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:07.001 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f9236a-c8c5-4462-93f7-1ba7615ee26d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:07.001 159331 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]: global
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    log         /dev/log local0 debug
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    log-tag     haproxy-metadata-proxy-7028e9ff-4580-4927-a34a-bf2749f519c0
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    user        root
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    group       root
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    maxconn     1024
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    pidfile     /var/lib/neutron/external/pids/7028e9ff-4580-4927-a34a-bf2749f519c0.pid.haproxy
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    daemon
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]: defaults
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    log global
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    mode http
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    option httplog
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    option dontlognull
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    option http-server-close
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    option forwardfor
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    retries                 3
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    timeout http-request    30s
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    timeout connect         30s
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    timeout client          32s
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    timeout server          32s
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    timeout http-keep-alive 30s
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]: 
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]: listen listener
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    bind 169.254.169.254:80
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    server metadata /var/lib/neutron/metadata_proxy
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]:    http-request add-header X-OVN-Network-ID 7028e9ff-4580-4927-a34a-bf2749f519c0
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 26 13:48:07 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:07.002 159331 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0', 'env', 'PROCESS_TAG=haproxy-7028e9ff-4580-4927-a34a-bf2749f519c0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7028e9ff-4580-4927-a34a-bf2749f519c0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.013 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.380 247428 DEBUG nova.compute.manager [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 26 13:48:07 np0005596060 podman[311392]: 2026-01-26 18:48:07.38068415 +0000 UTC m=+0.051176622 container create cffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.381 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769453287.3794749, 4448620c-6ae7-4a36-98b9-cf616b071da7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.382 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] VM Started (Lifecycle Event)#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.387 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.392 247428 INFO nova.virt.libvirt.driver [-] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Instance spawned successfully.#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.392 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.411 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.417 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:48:07 np0005596060 systemd[1]: Started libpod-conmon-cffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8.scope.
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.420 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.420 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.421 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.421 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.421 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.422 247428 DEBUG nova.virt.libvirt.driver [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 26 13:48:07 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.446 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.446 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769453287.3808267, 4448620c-6ae7-4a36-98b9-cf616b071da7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.446 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] VM Paused (Lifecycle Event)#033[00m
Jan 26 13:48:07 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecd0f407e085efffa09cf99af919136a49fa5fc7c7c896f8451e31ba79ab57f7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:07 np0005596060 podman[311392]: 2026-01-26 18:48:07.354083544 +0000 UTC m=+0.024576066 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 13:48:07 np0005596060 podman[311392]: 2026-01-26 18:48:07.457943723 +0000 UTC m=+0.128436205 container init cffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:48:07 np0005596060 podman[311392]: 2026-01-26 18:48:07.462711112 +0000 UTC m=+0.133203584 container start cffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.483 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:48:07 np0005596060 neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0[311408]: [NOTICE]   (311412) : New worker (311414) forked
Jan 26 13:48:07 np0005596060 neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0[311408]: [NOTICE]   (311412) : Loading success.
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.487 247428 DEBUG nova.virt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Emitting event <LifecycleEvent: 1769453287.385214, 4448620c-6ae7-4a36-98b9-cf616b071da7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.487 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] VM Resumed (Lifecycle Event)#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.498 247428 INFO nova.compute.manager [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Took 7.63 seconds to spawn the instance on the hypervisor.#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.498 247428 DEBUG nova.compute.manager [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.530 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.534 247428 DEBUG nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.558 247428 INFO nova.compute.manager [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.573 247428 INFO nova.compute.manager [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Took 8.54 seconds to build instance.#033[00m
Jan 26 13:48:07 np0005596060 nova_compute[247421]: 2026-01-26 18:48:07.590 247428 DEBUG oslo_concurrency.lockutils [None req-aae7d8a0-af18-467d-a85f-b60b2e95bb45 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:48:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:07.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:08.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:08 np0005596060 nova_compute[247421]: 2026-01-26 18:48:08.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:48:08 np0005596060 nova_compute[247421]: 2026-01-26 18:48:08.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 26 13:48:08 np0005596060 nova_compute[247421]: 2026-01-26 18:48:08.885 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:08 np0005596060 nova_compute[247421]: 2026-01-26 18:48:08.989 247428 DEBUG nova.compute.manager [req-2662b234-7e33-45e9-9afc-f2581760c17a req-3b199ce8-30d3-46bf-acca-08f52cac99ee 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Received event network-vif-plugged-c81857f7-d034-41c1-8f0f-2d11c566b9fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:48:08 np0005596060 nova_compute[247421]: 2026-01-26 18:48:08.989 247428 DEBUG oslo_concurrency.lockutils [req-2662b234-7e33-45e9-9afc-f2581760c17a req-3b199ce8-30d3-46bf-acca-08f52cac99ee 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:48:08 np0005596060 nova_compute[247421]: 2026-01-26 18:48:08.989 247428 DEBUG oslo_concurrency.lockutils [req-2662b234-7e33-45e9-9afc-f2581760c17a req-3b199ce8-30d3-46bf-acca-08f52cac99ee 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:48:08 np0005596060 nova_compute[247421]: 2026-01-26 18:48:08.989 247428 DEBUG oslo_concurrency.lockutils [req-2662b234-7e33-45e9-9afc-f2581760c17a req-3b199ce8-30d3-46bf-acca-08f52cac99ee 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:48:08 np0005596060 nova_compute[247421]: 2026-01-26 18:48:08.990 247428 DEBUG nova.compute.manager [req-2662b234-7e33-45e9-9afc-f2581760c17a req-3b199ce8-30d3-46bf-acca-08f52cac99ee 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] No waiting events found dispatching network-vif-plugged-c81857f7-d034-41c1-8f0f-2d11c566b9fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:48:08 np0005596060 nova_compute[247421]: 2026-01-26 18:48:08.990 247428 WARNING nova.compute.manager [req-2662b234-7e33-45e9-9afc-f2581760c17a req-3b199ce8-30d3-46bf-acca-08f52cac99ee 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Received unexpected event network-vif-plugged-c81857f7-d034-41c1-8f0f-2d11c566b9fa for instance with vm_state active and task_state None.#033[00m
Jan 26 13:48:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 26 13:48:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:09.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:10.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:10 np0005596060 nova_compute[247421]: 2026-01-26 18:48:10.343 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:10 np0005596060 NetworkManager[48900]: <info>  [1769453290.3443] manager: (patch-br-int-to-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Jan 26 13:48:10 np0005596060 NetworkManager[48900]: <info>  [1769453290.3462] manager: (patch-provnet-7e8d8b01-8f69-4c2f-9ca3-c7f2a9ff632c-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Jan 26 13:48:10 np0005596060 ovn_controller[148842]: 2026-01-26T18:48:10Z|00218|binding|INFO|Releasing lport 346fccd6-0c61-47af-adf2-0479f55b1687 from this chassis (sb_readonly=0)
Jan 26 13:48:10 np0005596060 nova_compute[247421]: 2026-01-26 18:48:10.425 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:10 np0005596060 nova_compute[247421]: 2026-01-26 18:48:10.428 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:10 np0005596060 nova_compute[247421]: 2026-01-26 18:48:10.640 247428 DEBUG nova.compute.manager [req-4874f496-f099-4b16-879d-753dba5f1c06 req-f6db6f50-dc9a-480b-acf6-50b12cd5da80 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Received event network-changed-c81857f7-d034-41c1-8f0f-2d11c566b9fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:48:10 np0005596060 nova_compute[247421]: 2026-01-26 18:48:10.641 247428 DEBUG nova.compute.manager [req-4874f496-f099-4b16-879d-753dba5f1c06 req-f6db6f50-dc9a-480b-acf6-50b12cd5da80 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Refreshing instance network info cache due to event network-changed-c81857f7-d034-41c1-8f0f-2d11c566b9fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:48:10 np0005596060 nova_compute[247421]: 2026-01-26 18:48:10.641 247428 DEBUG oslo_concurrency.lockutils [req-4874f496-f099-4b16-879d-753dba5f1c06 req-f6db6f50-dc9a-480b-acf6-50b12cd5da80 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:48:10 np0005596060 nova_compute[247421]: 2026-01-26 18:48:10.641 247428 DEBUG oslo_concurrency.lockutils [req-4874f496-f099-4b16-879d-753dba5f1c06 req-f6db6f50-dc9a-480b-acf6-50b12cd5da80 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:48:10 np0005596060 nova_compute[247421]: 2026-01-26 18:48:10.641 247428 DEBUG nova.network.neutron [req-4874f496-f099-4b16-879d-753dba5f1c06 req-f6db6f50-dc9a-480b-acf6-50b12cd5da80 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Refreshing network info cache for port c81857f7-d034-41c1-8f0f-2d11c566b9fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:48:10 np0005596060 nova_compute[247421]: 2026-01-26 18:48:10.691 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:10 np0005596060 podman[311426]: 2026-01-26 18:48:10.818156995 +0000 UTC m=+0.069379497 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 13:48:10 np0005596060 podman[311427]: 2026-01-26 18:48:10.84834415 +0000 UTC m=+0.099567592 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 13:48:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 26 13:48:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:11.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:11 np0005596060 nova_compute[247421]: 2026-01-26 18:48:11.812 247428 DEBUG nova.network.neutron [req-4874f496-f099-4b16-879d-753dba5f1c06 req-f6db6f50-dc9a-480b-acf6-50b12cd5da80 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Updated VIF entry in instance network info cache for port c81857f7-d034-41c1-8f0f-2d11c566b9fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:48:11 np0005596060 nova_compute[247421]: 2026-01-26 18:48:11.813 247428 DEBUG nova.network.neutron [req-4874f496-f099-4b16-879d-753dba5f1c06 req-f6db6f50-dc9a-480b-acf6-50b12cd5da80 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Updating instance_info_cache with network_info: [{"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:48:11 np0005596060 nova_compute[247421]: 2026-01-26 18:48:11.830 247428 DEBUG oslo_concurrency.lockutils [req-4874f496-f099-4b16-879d-753dba5f1c06 req-f6db6f50-dc9a-480b-acf6-50b12cd5da80 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:48:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:48:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:12.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 13:48:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:13.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:13 np0005596060 nova_compute[247421]: 2026-01-26 18:48:13.887 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:48:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:48:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:48:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:48:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:48:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:48:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:14.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:14.775 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:48:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:14.776 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:48:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:48:14.777 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:48:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:48:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:15.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:15 np0005596060 nova_compute[247421]: 2026-01-26 18:48:15.694 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:16.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:48:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:48:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:17.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:18.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:18 np0005596060 nova_compute[247421]: 2026-01-26 18:48:18.889 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 13:48:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:19.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:20.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:20 np0005596060 ovn_controller[148842]: 2026-01-26T18:48:20Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:d6:09 10.100.0.7
Jan 26 13:48:20 np0005596060 ovn_controller[148842]: 2026-01-26T18:48:20Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:d6:09 10.100.0.7
Jan 26 13:48:20 np0005596060 nova_compute[247421]: 2026-01-26 18:48:20.698 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 88 MiB data, 394 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 68 op/s
Jan 26 13:48:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:21.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:48:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:22.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 26 13:48:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:23.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:23 np0005596060 nova_compute[247421]: 2026-01-26 18:48:23.892 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:24.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
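The ceph-mgr `pgmap` DBG lines carry the cluster's headline health numbers (PG states, data/used/avail, read/write rates). A sketch that extracts the capacity fields, with the regex written against these samples (other pgmap variants may add fields such as recovery rates):

```python
import re

# Matches the "pgmap vN: ... data, ... used, ... / ... avail" shape seen
# in the excerpt; the trailing rd/wr/op rates are deliberately ignored.
PGMAP_RE = re.compile(
    r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*; '
    r'(?P<data>[\d.]+ [KMGT]iB) data, (?P<used>[\d.]+ [KMGT]iB) used, '
    r'(?P<avail>[\d.]+ [KMGT]iB) / (?P<total>[\d.]+ [KMGT]iB) avail'
)

def parse_pgmap(line):
    """Return capacity fields from a pgmap line, or None on no match."""
    m = PGMAP_RE.search(line)
    return m.groupdict() if m else None

line = ('pgmap v2283: 305 pgs: 305 active+clean; 121 MiB data, '
        '441 MiB used, 21 GiB / 21 GiB avail; 325 KiB/s rd, '
        '2.1 MiB/s wr, 63 op/s')
info = parse_pgmap(line)
print(info['ver'], info['pgs'], info['data'], info['used'])
```

Tracking `used` across successive pgmap versions (v2281 → v2283 here grows 394 MiB → 441 MiB) is a cheap way to spot write bursts without querying the cluster directly.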
Jan 26 13:48:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:25.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:25 np0005596060 nova_compute[247421]: 2026-01-26 18:48:25.709 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:26.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:26 np0005596060 nova_compute[247421]: 2026-01-26 18:48:26.670 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:48:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:48:26 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Jan 26 13:48:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:26.990845) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:48:26 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Jan 26 13:48:26 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453306990914, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1262, "num_deletes": 253, "total_data_size": 2034940, "memory_usage": 2061048, "flush_reason": "Manual Compaction"}
Jan 26 13:48:26 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Jan 26 13:48:26 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453306999085, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 1235690, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48295, "largest_seqno": 49556, "table_properties": {"data_size": 1230963, "index_size": 2123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12617, "raw_average_key_size": 21, "raw_value_size": 1220588, "raw_average_value_size": 2051, "num_data_blocks": 95, "num_entries": 595, "num_filter_entries": 595, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769453194, "oldest_key_time": 1769453194, "file_creation_time": 1769453306, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:48:26 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 8278 microseconds, and 3810 cpu microseconds.
Jan 26 13:48:26 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:26.999132) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 1235690 bytes OK
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:26.999147) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.001151) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.001195) EVENT_LOG_v1 {"time_micros": 1769453307001163, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.001212) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2029369, prev total WAL file size 2029369, number of live WAL files 2.
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.002268) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373535' seq:72057594037927935, type:22 .. '6D6772737461740032303036' seq:0, type:0; will stop at (end)
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(1206KB)], [107(11MB)]
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453307002344, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 13374698, "oldest_snapshot_seqno": -1}
Jan 26 13:48:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 7417 keys, 10286510 bytes, temperature: kUnknown
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453307068642, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10286510, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10239928, "index_size": 26950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18565, "raw_key_size": 191376, "raw_average_key_size": 25, "raw_value_size": 10109938, "raw_average_value_size": 1363, "num_data_blocks": 1071, "num_entries": 7417, "num_filter_entries": 7417, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769453307, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.068881) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10286510 bytes
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.070239) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 201.5 rd, 154.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.6 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(19.1) write-amplify(8.3) OK, records in: 7890, records dropped: 473 output_compression: NoCompression
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.070255) EVENT_LOG_v1 {"time_micros": 1769453307070247, "job": 64, "event": "compaction_finished", "compaction_time_micros": 66390, "compaction_time_cpu_micros": 32266, "output_level": 6, "num_output_files": 1, "total_output_size": 10286510, "num_input_records": 7890, "num_output_records": 7417, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453307070636, "job": 64, "event": "table_file_deletion", "file_number": 109}
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453307072640, "job": 64, "event": "table_file_deletion", "file_number": 107}
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.002201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.072718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.072722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.072723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.072725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:48:27 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:48:27.072727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
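The rocksdb burst above (flush job 63, compaction job 64) includes machine-readable `EVENT_LOG_v1` records whose payload is plain JSON. A sketch that pulls the JSON out of such a line; the sample values are taken from the `compaction_finished` event above, trimmed to a few fields for brevity:

```python
import json
import re

# EVENT_LOG_v1 lines embed a single JSON object after the marker;
# everything before the marker is syslog/ceph framing we can discard.
EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})')

def parse_event(line):
    """Return the EVENT_LOG_v1 JSON payload as a dict, or None."""
    m = EVENT_RE.search(line)
    return json.loads(m.group(1)) if m else None

line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1769453307070247, '
        '"job": 64, "event": "compaction_finished", '
        '"compaction_time_micros": 66390, "total_output_size": 10286510, '
        '"num_input_records": 7890, "num_output_records": 7417}')
ev = parse_event(line)
# Records dropped by the compaction (tombstoned/overwritten keys).
dropped = ev['num_input_records'] - ev['num_output_records']
print(ev['event'], ev['job'], dropped)
```

The 7890 in / 7417 out figures match the human-readable summary line (records dropped: 473), so either form can feed a compaction dashboard; the JSON one is the less fragile to parse.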
Jan 26 13:48:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:27.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:28.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:28 np0005596060 nova_compute[247421]: 2026-01-26 18:48:28.893 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:48:29 np0005596060 nova_compute[247421]: 2026-01-26 18:48:29.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:48:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:29.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:30.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:30 np0005596060 nova_compute[247421]: 2026-01-26 18:48:30.713 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:48:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:31.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:48:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:32.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:32 np0005596060 nova_compute[247421]: 2026-01-26 18:48:32.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:48:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 13:48:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:33.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:33 np0005596060 nova_compute[247421]: 2026-01-26 18:48:33.894 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:48:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:34.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:48:34 np0005596060 nova_compute[247421]: 2026-01-26 18:48:34.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:48:34 np0005596060 nova_compute[247421]: 2026-01-26 18:48:34.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:48:34 np0005596060 nova_compute[247421]: 2026-01-26 18:48:34.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:48:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Jan 26 13:48:35 np0005596060 nova_compute[247421]: 2026-01-26 18:48:35.653 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:48:35 np0005596060 nova_compute[247421]: 2026-01-26 18:48:35.653 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquired lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:48:35 np0005596060 nova_compute[247421]: 2026-01-26 18:48:35.653 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 26 13:48:35 np0005596060 nova_compute[247421]: 2026-01-26 18:48:35.653 247428 DEBUG nova.objects.instance [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4448620c-6ae7-4a36-98b9-cf616b071da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:48:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:35.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:35 np0005596060 nova_compute[247421]: 2026-01-26 18:48:35.716 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:36.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:36 np0005596060 nova_compute[247421]: 2026-01-26 18:48:36.756 247428 DEBUG nova.network.neutron [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Updating instance_info_cache with network_info: [{"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:48:36 np0005596060 nova_compute[247421]: 2026-01-26 18:48:36.830 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Releasing lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:48:36 np0005596060 nova_compute[247421]: 2026-01-26 18:48:36.831 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 26 13:48:36 np0005596060 nova_compute[247421]: 2026-01-26 18:48:36.831 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:48:36 np0005596060 nova_compute[247421]: 2026-01-26 18:48:36.832 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:48:36 np0005596060 nova_compute[247421]: 2026-01-26 18:48:36.832 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:48:36 np0005596060 nova_compute[247421]: 2026-01-26 18:48:36.832 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:48:36 np0005596060 nova_compute[247421]: 2026-01-26 18:48:36.854 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:48:36 np0005596060 nova_compute[247421]: 2026-01-26 18:48:36.855 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:48:36 np0005596060 nova_compute[247421]: 2026-01-26 18:48:36.855 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:48:36 np0005596060 nova_compute[247421]: 2026-01-26 18:48:36.855 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:48:36 np0005596060 nova_compute[247421]: 2026-01-26 18:48:36.856 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:48:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:48:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 26 13:48:37 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:48:37 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2160540207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:48:37 np0005596060 nova_compute[247421]: 2026-01-26 18:48:37.288 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:48:37 np0005596060 nova_compute[247421]: 2026-01-26 18:48:37.558 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:48:37 np0005596060 nova_compute[247421]: 2026-01-26 18:48:37.559 247428 DEBUG nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 26 13:48:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:48:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:37.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:48:37 np0005596060 nova_compute[247421]: 2026-01-26 18:48:37.736 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:48:37 np0005596060 nova_compute[247421]: 2026-01-26 18:48:37.738 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4385MB free_disk=20.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:48:37 np0005596060 nova_compute[247421]: 2026-01-26 18:48:37.738 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:48:37 np0005596060 nova_compute[247421]: 2026-01-26 18:48:37.738 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:48:37 np0005596060 nova_compute[247421]: 2026-01-26 18:48:37.942 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Instance 4448620c-6ae7-4a36-98b9-cf616b071da7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 26 13:48:37 np0005596060 nova_compute[247421]: 2026-01-26 18:48:37.943 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:48:37 np0005596060 nova_compute[247421]: 2026-01-26 18:48:37.944 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:48:37 np0005596060 nova_compute[247421]: 2026-01-26 18:48:37.995 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing inventories for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 26 13:48:38 np0005596060 nova_compute[247421]: 2026-01-26 18:48:38.016 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating ProviderTree inventory for provider c679f5ea-e093-4909-bb04-0342c8551a8f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 26 13:48:38 np0005596060 nova_compute[247421]: 2026-01-26 18:48:38.016 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 26 13:48:38 np0005596060 nova_compute[247421]: 2026-01-26 18:48:38.032 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing aggregate associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 26 13:48:38 np0005596060 nova_compute[247421]: 2026-01-26 18:48:38.051 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing trait associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 26 13:48:38 np0005596060 nova_compute[247421]: 2026-01-26 18:48:38.101 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:48:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:38.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:48:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3167100166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:48:38 np0005596060 nova_compute[247421]: 2026-01-26 18:48:38.521 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:48:38 np0005596060 nova_compute[247421]: 2026-01-26 18:48:38.528 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:48:38 np0005596060 nova_compute[247421]: 2026-01-26 18:48:38.544 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:48:38 np0005596060 nova_compute[247421]: 2026-01-26 18:48:38.563 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:48:38 np0005596060 nova_compute[247421]: 2026-01-26 18:48:38.564 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:48:38 np0005596060 nova_compute[247421]: 2026-01-26 18:48:38.895 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 1.3 KiB/s wr, 0 op/s
Jan 26 13:48:39 np0005596060 nova_compute[247421]: 2026-01-26 18:48:39.382 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:48:39 np0005596060 nova_compute[247421]: 2026-01-26 18:48:39.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:48:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:39.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:40.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:40 np0005596060 ovn_controller[148842]: 2026-01-26T18:48:40Z|00219|memory_trim|INFO|Detected inactivity (last active 30022 ms ago): trimming memory
Jan 26 13:48:40 np0005596060 nova_compute[247421]: 2026-01-26 18:48:40.719 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 26 13:48:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:41.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:41 np0005596060 podman[311580]: 2026-01-26 18:48:41.788148669 +0000 UTC m=+0.054603828 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 13:48:41 np0005596060 podman[311581]: 2026-01-26 18:48:41.815429701 +0000 UTC m=+0.080435434 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 26 13:48:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:48:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:42.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:42 np0005596060 nova_compute[247421]: 2026-01-26 18:48:42.772 247428 DEBUG oslo_concurrency.lockutils [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquiring lock "4448620c-6ae7-4a36-98b9-cf616b071da7" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:48:42 np0005596060 nova_compute[247421]: 2026-01-26 18:48:42.773 247428 DEBUG oslo_concurrency.lockutils [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:48:42 np0005596060 nova_compute[247421]: 2026-01-26 18:48:42.794 247428 DEBUG nova.objects.instance [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lazy-loading 'flavor' on Instance uuid 4448620c-6ae7-4a36-98b9-cf616b071da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:48:42 np0005596060 nova_compute[247421]: 2026-01-26 18:48:42.836 247428 DEBUG oslo_concurrency.lockutils [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:48:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.112 247428 DEBUG oslo_concurrency.lockutils [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquiring lock "4448620c-6ae7-4a36-98b9-cf616b071da7" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.113 247428 DEBUG oslo_concurrency.lockutils [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.113 247428 INFO nova.compute.manager [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Attaching volume d2427b35-a448-4de9-82a6-1f436efa15f7 to /dev/vdb#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.247 247428 DEBUG os_brick.utils [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.250 257571 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.264 257571 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.265 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[50d171af-256b-4e7a-841d-a9bc5e926f00]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.267 257571 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.276 257571 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.277 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[1b81dea4-9a09-40ca-a4ff-01bbcc7c057c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:14cb718ec160', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.279 257571 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.291 257571 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.291 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c98340-513e-4c4f-8e32-88dea9338db3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.293 257571 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e87c0c-ac8b-43f9-aec2-50649dd53a9a]: (4, 'd27b7a41-30de-40e4-9f10-b4e4f5902919') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.294 247428 DEBUG oslo_concurrency.processutils [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.322 247428 DEBUG oslo_concurrency.processutils [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.325 247428 DEBUG os_brick.initiator.connectors.lightos [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.326 247428 DEBUG os_brick.initiator.connectors.lightos [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.326 247428 DEBUG os_brick.initiator.connectors.lightos [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.326 247428 DEBUG os_brick.utils [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:14cb718ec160', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'd27b7a41-30de-40e4-9f10-b4e4f5902919', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.327 247428 DEBUG nova.virt.block_device [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Updating existing volume attachment record: b395257d-7cbf-4d21-8e10-1311855ec4d5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 26 13:48:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:43.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:43 np0005596060 nova_compute[247421]: 2026-01-26 18:48:43.897 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:44 np0005596060 nova_compute[247421]: 2026-01-26 18:48:44.095 247428 DEBUG nova.objects.instance [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lazy-loading 'flavor' on Instance uuid 4448620c-6ae7-4a36-98b9-cf616b071da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:48:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:48:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:48:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:48:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:48:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:48:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:48:44 np0005596060 nova_compute[247421]: 2026-01-26 18:48:44.117 247428 DEBUG nova.virt.libvirt.driver [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Attempting to attach volume d2427b35-a448-4de9-82a6-1f436efa15f7 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 26 13:48:44 np0005596060 nova_compute[247421]: 2026-01-26 18:48:44.121 247428 DEBUG nova.virt.libvirt.guest [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] attach device xml: <disk type="network" device="disk">
Jan 26 13:48:44 np0005596060 nova_compute[247421]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 26 13:48:44 np0005596060 nova_compute[247421]:  <source protocol="rbd" name="volumes/volume-d2427b35-a448-4de9-82a6-1f436efa15f7">
Jan 26 13:48:44 np0005596060 nova_compute[247421]:    <host name="192.168.122.100" port="6789"/>
Jan 26 13:48:44 np0005596060 nova_compute[247421]:    <host name="192.168.122.102" port="6789"/>
Jan 26 13:48:44 np0005596060 nova_compute[247421]:    <host name="192.168.122.101" port="6789"/>
Jan 26 13:48:44 np0005596060 nova_compute[247421]:  </source>
Jan 26 13:48:44 np0005596060 nova_compute[247421]:  <auth username="openstack">
Jan 26 13:48:44 np0005596060 nova_compute[247421]:    <secret type="ceph" uuid="d4cd1917-5876-51b6-bc64-65a16199754d"/>
Jan 26 13:48:44 np0005596060 nova_compute[247421]:  </auth>
Jan 26 13:48:44 np0005596060 nova_compute[247421]:  <target dev="vdb" bus="virtio"/>
Jan 26 13:48:44 np0005596060 nova_compute[247421]:  <serial>d2427b35-a448-4de9-82a6-1f436efa15f7</serial>
Jan 26 13:48:44 np0005596060 nova_compute[247421]: </disk>
Jan 26 13:48:44 np0005596060 nova_compute[247421]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 26 13:48:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:48:44
Jan 26 13:48:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:48:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:48:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'images', 'cephfs.cephfs.meta']
Jan 26 13:48:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:48:44 np0005596060 nova_compute[247421]: 2026-01-26 18:48:44.246 247428 DEBUG nova.virt.libvirt.driver [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:48:44 np0005596060 nova_compute[247421]: 2026-01-26 18:48:44.248 247428 DEBUG nova.virt.libvirt.driver [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:48:44 np0005596060 nova_compute[247421]: 2026-01-26 18:48:44.248 247428 DEBUG nova.virt.libvirt.driver [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 26 13:48:44 np0005596060 nova_compute[247421]: 2026-01-26 18:48:44.249 247428 DEBUG nova.virt.libvirt.driver [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] No VIF found with MAC fa:16:3e:3f:d6:09, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 26 13:48:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 26 13:48:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:44.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 26 13:48:44 np0005596060 nova_compute[247421]: 2026-01-26 18:48:44.403 247428 DEBUG oslo_concurrency.lockutils [None req-606e992a-4b5e-424d-b11a-374f0dbbadf0 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:48:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:48:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:48:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:48:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:48:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:48:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:48:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:48:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:48:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:48:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:48:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 682 B/s rd, 2.0 KiB/s wr, 0 op/s
Jan 26 13:48:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:45.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:45 np0005596060 nova_compute[247421]: 2026-01-26 18:48:45.722 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:48:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:46.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:48:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:48:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 121 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 3.4 KiB/s rd, 2.0 KiB/s wr, 2 op/s
Jan 26 13:48:47 np0005596060 nova_compute[247421]: 2026-01-26 18:48:47.569 247428 DEBUG oslo_concurrency.lockutils [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquiring lock "4448620c-6ae7-4a36-98b9-cf616b071da7" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:48:47 np0005596060 nova_compute[247421]: 2026-01-26 18:48:47.569 247428 DEBUG oslo_concurrency.lockutils [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:48:47 np0005596060 nova_compute[247421]: 2026-01-26 18:48:47.582 247428 INFO nova.compute.manager [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Detaching volume d2427b35-a448-4de9-82a6-1f436efa15f7#033[00m
Jan 26 13:48:47 np0005596060 nova_compute[247421]: 2026-01-26 18:48:47.697 247428 INFO nova.virt.block_device [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Attempting to driver detach volume d2427b35-a448-4de9-82a6-1f436efa15f7 from mountpoint /dev/vdb#033[00m
Jan 26 13:48:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:48:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:47.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:48:47 np0005596060 nova_compute[247421]: 2026-01-26 18:48:47.704 247428 DEBUG nova.virt.libvirt.driver [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Attempting to detach device vdb from instance 4448620c-6ae7-4a36-98b9-cf616b071da7 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 26 13:48:47 np0005596060 nova_compute[247421]: 2026-01-26 18:48:47.704 247428 DEBUG nova.virt.libvirt.guest [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] detach device xml: <disk type="network" device="disk">
Jan 26 13:48:47 np0005596060 nova_compute[247421]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:  <source protocol="rbd" name="volumes/volume-d2427b35-a448-4de9-82a6-1f436efa15f7">
Jan 26 13:48:47 np0005596060 nova_compute[247421]:    <host name="192.168.122.100" port="6789"/>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:    <host name="192.168.122.102" port="6789"/>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:    <host name="192.168.122.101" port="6789"/>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:  </source>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:  <target dev="vdb" bus="virtio"/>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:  <serial>d2427b35-a448-4de9-82a6-1f436efa15f7</serial>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 26 13:48:47 np0005596060 nova_compute[247421]: </disk>
Jan 26 13:48:47 np0005596060 nova_compute[247421]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 26 13:48:47 np0005596060 nova_compute[247421]: 2026-01-26 18:48:47.710 247428 INFO nova.virt.libvirt.driver [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Successfully detached device vdb from instance 4448620c-6ae7-4a36-98b9-cf616b071da7 from the persistent domain config.#033[00m
Jan 26 13:48:47 np0005596060 nova_compute[247421]: 2026-01-26 18:48:47.710 247428 DEBUG nova.virt.libvirt.driver [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 4448620c-6ae7-4a36-98b9-cf616b071da7 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 26 13:48:47 np0005596060 nova_compute[247421]: 2026-01-26 18:48:47.711 247428 DEBUG nova.virt.libvirt.guest [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] detach device xml: <disk type="network" device="disk">
Jan 26 13:48:47 np0005596060 nova_compute[247421]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:  <source protocol="rbd" name="volumes/volume-d2427b35-a448-4de9-82a6-1f436efa15f7">
Jan 26 13:48:47 np0005596060 nova_compute[247421]:    <host name="192.168.122.100" port="6789"/>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:    <host name="192.168.122.102" port="6789"/>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:    <host name="192.168.122.101" port="6789"/>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:  </source>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:  <target dev="vdb" bus="virtio"/>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:  <serial>d2427b35-a448-4de9-82a6-1f436efa15f7</serial>
Jan 26 13:48:47 np0005596060 nova_compute[247421]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 26 13:48:47 np0005596060 nova_compute[247421]: </disk>
Jan 26 13:48:47 np0005596060 nova_compute[247421]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 26 13:48:47 np0005596060 nova_compute[247421]: 2026-01-26 18:48:47.804 247428 DEBUG nova.virt.libvirt.driver [None req-b9c00ddc-b155-4326-9877-249e9edac0fb - - - - - -] Received event <DeviceRemovedEvent: 1769453327.8044646, 4448620c-6ae7-4a36-98b9-cf616b071da7 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 26 13:48:47 np0005596060 nova_compute[247421]: 2026-01-26 18:48:47.807 247428 DEBUG nova.virt.libvirt.driver [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 4448620c-6ae7-4a36-98b9-cf616b071da7 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 26 13:48:47 np0005596060 nova_compute[247421]: 2026-01-26 18:48:47.809 247428 INFO nova.virt.libvirt.driver [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Successfully detached device vdb from instance 4448620c-6ae7-4a36-98b9-cf616b071da7 from the live domain config.#033[00m
Jan 26 13:48:48 np0005596060 nova_compute[247421]: 2026-01-26 18:48:48.069 247428 DEBUG nova.objects.instance [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lazy-loading 'flavor' on Instance uuid 4448620c-6ae7-4a36-98b9-cf616b071da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:48:48 np0005596060 nova_compute[247421]: 2026-01-26 18:48:48.100 247428 DEBUG oslo_concurrency.lockutils [None req-20212075-5aff-47e1-bb8f-bad7daf60911 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:48:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:48:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:48:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:48:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:48:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:48:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:48.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:48:48 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b764045a-4a24-40d9-becc-406db6ccb54d does not exist
Jan 26 13:48:48 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d3b7916f-cd72-42c7-8a14-c8c35915d468 does not exist
Jan 26 13:48:48 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 56346be3-21ec-4250-99de-0a9a48150da5 does not exist
Jan 26 13:48:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:48:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:48:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:48:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:48:48 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:48:48 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:48:48 np0005596060 nova_compute[247421]: 2026-01-26 18:48:48.899 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:49 np0005596060 podman[311980]: 2026-01-26 18:48:48.918543021 +0000 UTC m=+0.023605072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:48:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 123 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 6.7 KiB/s rd, 184 KiB/s wr, 8 op/s
Jan 26 13:48:49 np0005596060 podman[311980]: 2026-01-26 18:48:49.109279504 +0000 UTC m=+0.214341535 container create 9d1d39c6045acb526e163b927cab60e4babe0fdfa68dec3f620cd97f9b261b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 26 13:48:49 np0005596060 systemd[1]: Started libpod-conmon-9d1d39c6045acb526e163b927cab60e4babe0fdfa68dec3f620cd97f9b261b37.scope.
Jan 26 13:48:49 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:48:49 np0005596060 podman[311980]: 2026-01-26 18:48:49.198346772 +0000 UTC m=+0.303408823 container init 9d1d39c6045acb526e163b927cab60e4babe0fdfa68dec3f620cd97f9b261b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cannon, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:48:49 np0005596060 podman[311980]: 2026-01-26 18:48:49.206338082 +0000 UTC m=+0.311400113 container start 9d1d39c6045acb526e163b927cab60e4babe0fdfa68dec3f620cd97f9b261b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:48:49 np0005596060 podman[311980]: 2026-01-26 18:48:49.210288131 +0000 UTC m=+0.315350182 container attach 9d1d39c6045acb526e163b927cab60e4babe0fdfa68dec3f620cd97f9b261b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cannon, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:48:49 np0005596060 dazzling_cannon[311997]: 167 167
Jan 26 13:48:49 np0005596060 systemd[1]: libpod-9d1d39c6045acb526e163b927cab60e4babe0fdfa68dec3f620cd97f9b261b37.scope: Deactivated successfully.
Jan 26 13:48:49 np0005596060 podman[311980]: 2026-01-26 18:48:49.2134305 +0000 UTC m=+0.318492541 container died 9d1d39c6045acb526e163b927cab60e4babe0fdfa68dec3f620cd97f9b261b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cannon, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:48:49 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ea9961e8b3843c9e6921c037073bea7121ad7d37b90c389289fe7000c70bee29-merged.mount: Deactivated successfully.
Jan 26 13:48:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:48:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:48:49 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:48:49 np0005596060 podman[311980]: 2026-01-26 18:48:49.25101728 +0000 UTC m=+0.356079311 container remove 9d1d39c6045acb526e163b927cab60e4babe0fdfa68dec3f620cd97f9b261b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:48:49 np0005596060 systemd[1]: libpod-conmon-9d1d39c6045acb526e163b927cab60e4babe0fdfa68dec3f620cd97f9b261b37.scope: Deactivated successfully.
Jan 26 13:48:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Jan 26 13:48:49 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Jan 26 13:48:49 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Jan 26 13:48:49 np0005596060 podman[312020]: 2026-01-26 18:48:49.474496082 +0000 UTC m=+0.078250539 container create 498776d545f78f235e16436a0e59d8dc6d2a5752d0e4549b420d7807a2169c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:48:49 np0005596060 systemd[1]: Started libpod-conmon-498776d545f78f235e16436a0e59d8dc6d2a5752d0e4549b420d7807a2169c6a.scope.
Jan 26 13:48:49 np0005596060 podman[312020]: 2026-01-26 18:48:49.423208679 +0000 UTC m=+0.026963136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:48:49 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:48:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a764f48b3ce11b3c3439309a5844753cc6520db5d28c6578b02b2bdc19a7297/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a764f48b3ce11b3c3439309a5844753cc6520db5d28c6578b02b2bdc19a7297/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a764f48b3ce11b3c3439309a5844753cc6520db5d28c6578b02b2bdc19a7297/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a764f48b3ce11b3c3439309a5844753cc6520db5d28c6578b02b2bdc19a7297/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:49 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a764f48b3ce11b3c3439309a5844753cc6520db5d28c6578b02b2bdc19a7297/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:49 np0005596060 podman[312020]: 2026-01-26 18:48:49.566261079 +0000 UTC m=+0.170015556 container init 498776d545f78f235e16436a0e59d8dc6d2a5752d0e4549b420d7807a2169c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:48:49 np0005596060 podman[312020]: 2026-01-26 18:48:49.579995962 +0000 UTC m=+0.183750459 container start 498776d545f78f235e16436a0e59d8dc6d2a5752d0e4549b420d7807a2169c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:48:49 np0005596060 podman[312020]: 2026-01-26 18:48:49.584352451 +0000 UTC m=+0.188106908 container attach 498776d545f78f235e16436a0e59d8dc6d2a5752d0e4549b420d7807a2169c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 13:48:49 np0005596060 nova_compute[247421]: 2026-01-26 18:48:49.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:48:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:49.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:50.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:50 np0005596060 magical_stonebraker[312038]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:48:50 np0005596060 magical_stonebraker[312038]: --> relative data size: 1.0
Jan 26 13:48:50 np0005596060 magical_stonebraker[312038]: --> All data devices are unavailable
Jan 26 13:48:50 np0005596060 systemd[1]: libpod-498776d545f78f235e16436a0e59d8dc6d2a5752d0e4549b420d7807a2169c6a.scope: Deactivated successfully.
Jan 26 13:48:50 np0005596060 podman[312020]: 2026-01-26 18:48:50.420505983 +0000 UTC m=+1.024260520 container died 498776d545f78f235e16436a0e59d8dc6d2a5752d0e4549b420d7807a2169c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 26 13:48:50 np0005596060 systemd[1]: var-lib-containers-storage-overlay-2a764f48b3ce11b3c3439309a5844753cc6520db5d28c6578b02b2bdc19a7297-merged.mount: Deactivated successfully.
Jan 26 13:48:50 np0005596060 podman[312020]: 2026-01-26 18:48:50.47031006 +0000 UTC m=+1.074064517 container remove 498776d545f78f235e16436a0e59d8dc6d2a5752d0e4549b420d7807a2169c6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_stonebraker, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:48:50 np0005596060 systemd[1]: libpod-conmon-498776d545f78f235e16436a0e59d8dc6d2a5752d0e4549b420d7807a2169c6a.scope: Deactivated successfully.
Jan 26 13:48:50 np0005596060 nova_compute[247421]: 2026-01-26 18:48:50.627 247428 DEBUG nova.compute.manager [None req-2b0b5cbe-2c96-4200-8166-ca9f26dfc0a8 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:48:50 np0005596060 nova_compute[247421]: 2026-01-26 18:48:50.668 247428 INFO nova.compute.manager [None req-2b0b5cbe-2c96-4200-8166-ca9f26dfc0a8 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] instance snapshotting#033[00m
Jan 26 13:48:50 np0005596060 nova_compute[247421]: 2026-01-26 18:48:50.725 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:50 np0005596060 nova_compute[247421]: 2026-01-26 18:48:50.916 247428 INFO nova.virt.libvirt.driver [None req-2b0b5cbe-2c96-4200-8166-ca9f26dfc0a8 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Beginning live snapshot process#033[00m
Jan 26 13:48:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 123 MiB data, 441 MiB used, 21 GiB / 21 GiB avail; 8.1 KiB/s rd, 221 KiB/s wr, 9 op/s
Jan 26 13:48:51 np0005596060 nova_compute[247421]: 2026-01-26 18:48:51.060 247428 DEBUG nova.virt.libvirt.imagebackend [None req-2b0b5cbe-2c96-4200-8166-ca9f26dfc0a8 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] No parent info for 57de5960-c1c5-4cfa-af34-8f58cf25f585; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 26 13:48:51 np0005596060 podman[312212]: 2026-01-26 18:48:51.062015386 +0000 UTC m=+0.038625638 container create 88ce4eb47b12fd42cb22bcdacfb5b55f6933e66e307d2912273f799b21e4d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 13:48:51 np0005596060 systemd[1]: Started libpod-conmon-88ce4eb47b12fd42cb22bcdacfb5b55f6933e66e307d2912273f799b21e4d27d.scope.
Jan 26 13:48:51 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:48:51 np0005596060 podman[312212]: 2026-01-26 18:48:51.130582262 +0000 UTC m=+0.107192534 container init 88ce4eb47b12fd42cb22bcdacfb5b55f6933e66e307d2912273f799b21e4d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcclintock, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 26 13:48:51 np0005596060 podman[312212]: 2026-01-26 18:48:51.137004783 +0000 UTC m=+0.113615035 container start 88ce4eb47b12fd42cb22bcdacfb5b55f6933e66e307d2912273f799b21e4d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:48:51 np0005596060 podman[312212]: 2026-01-26 18:48:51.139987087 +0000 UTC m=+0.116597339 container attach 88ce4eb47b12fd42cb22bcdacfb5b55f6933e66e307d2912273f799b21e4d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:48:51 np0005596060 elastic_mcclintock[312255]: 167 167
Jan 26 13:48:51 np0005596060 systemd[1]: libpod-88ce4eb47b12fd42cb22bcdacfb5b55f6933e66e307d2912273f799b21e4d27d.scope: Deactivated successfully.
Jan 26 13:48:51 np0005596060 podman[312212]: 2026-01-26 18:48:51.047574725 +0000 UTC m=+0.024184997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:48:51 np0005596060 podman[312212]: 2026-01-26 18:48:51.143082865 +0000 UTC m=+0.119693117 container died 88ce4eb47b12fd42cb22bcdacfb5b55f6933e66e307d2912273f799b21e4d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:48:51 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ca99b5ae1e3f805b4be91484074a01f541d9b6e6d0a7f238d617aac624a1daa2-merged.mount: Deactivated successfully.
Jan 26 13:48:51 np0005596060 podman[312212]: 2026-01-26 18:48:51.178041019 +0000 UTC m=+0.154651271 container remove 88ce4eb47b12fd42cb22bcdacfb5b55f6933e66e307d2912273f799b21e4d27d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mcclintock, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:48:51 np0005596060 systemd[1]: libpod-conmon-88ce4eb47b12fd42cb22bcdacfb5b55f6933e66e307d2912273f799b21e4d27d.scope: Deactivated successfully.
Jan 26 13:48:51 np0005596060 nova_compute[247421]: 2026-01-26 18:48:51.292 247428 DEBUG nova.storage.rbd_utils [None req-2b0b5cbe-2c96-4200-8166-ca9f26dfc0a8 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] creating snapshot(459cc05b92b44e0fa92e74754e5b1330) on rbd image(4448620c-6ae7-4a36-98b9-cf616b071da7_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 26 13:48:51 np0005596060 podman[312277]: 2026-01-26 18:48:51.342122765 +0000 UTC m=+0.045805907 container create cc4c41bc8cbc6952405c54e567b370cde8a3c74da5085f0d41effba9d2f9d37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:48:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Jan 26 13:48:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Jan 26 13:48:51 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Jan 26 13:48:51 np0005596060 systemd[1]: Started libpod-conmon-cc4c41bc8cbc6952405c54e567b370cde8a3c74da5085f0d41effba9d2f9d37d.scope.
Jan 26 13:48:51 np0005596060 nova_compute[247421]: 2026-01-26 18:48:51.406 247428 DEBUG nova.storage.rbd_utils [None req-2b0b5cbe-2c96-4200-8166-ca9f26dfc0a8 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] cloning vms/4448620c-6ae7-4a36-98b9-cf616b071da7_disk@459cc05b92b44e0fa92e74754e5b1330 to images/3d5ea434-8f77-47de-b162-18bfa54d5fef clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 26 13:48:51 np0005596060 podman[312277]: 2026-01-26 18:48:51.323370716 +0000 UTC m=+0.027053878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:48:51 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:48:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6765ec3aebc97472711002b819a85623025467f50dbc2f7c2173856a79a68fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6765ec3aebc97472711002b819a85623025467f50dbc2f7c2173856a79a68fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6765ec3aebc97472711002b819a85623025467f50dbc2f7c2173856a79a68fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:51 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6765ec3aebc97472711002b819a85623025467f50dbc2f7c2173856a79a68fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:51 np0005596060 podman[312277]: 2026-01-26 18:48:51.445330438 +0000 UTC m=+0.149013600 container init cc4c41bc8cbc6952405c54e567b370cde8a3c74da5085f0d41effba9d2f9d37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 26 13:48:51 np0005596060 podman[312277]: 2026-01-26 18:48:51.452126898 +0000 UTC m=+0.155810040 container start cc4c41bc8cbc6952405c54e567b370cde8a3c74da5085f0d41effba9d2f9d37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:48:51 np0005596060 podman[312277]: 2026-01-26 18:48:51.454688162 +0000 UTC m=+0.158371324 container attach cc4c41bc8cbc6952405c54e567b370cde8a3c74da5085f0d41effba9d2f9d37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:48:51 np0005596060 nova_compute[247421]: 2026-01-26 18:48:51.542 247428 DEBUG nova.storage.rbd_utils [None req-2b0b5cbe-2c96-4200-8166-ca9f26dfc0a8 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] flattening images/3d5ea434-8f77-47de-b162-18bfa54d5fef flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 26 13:48:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:51.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:48:52 np0005596060 nova_compute[247421]: 2026-01-26 18:48:52.002 247428 DEBUG nova.storage.rbd_utils [None req-2b0b5cbe-2c96-4200-8166-ca9f26dfc0a8 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] removing snapshot(459cc05b92b44e0fa92e74754e5b1330) on rbd image(4448620c-6ae7-4a36-98b9-cf616b071da7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]: {
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:    "1": [
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:        {
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            "devices": [
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "/dev/loop3"
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            ],
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            "lv_name": "ceph_lv0",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            "lv_size": "7511998464",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            "name": "ceph_lv0",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            "tags": {
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "ceph.cluster_name": "ceph",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "ceph.crush_device_class": "",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "ceph.encrypted": "0",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "ceph.osd_id": "1",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "ceph.type": "block",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:                "ceph.vdo": "0"
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            },
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            "type": "block",
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:            "vg_name": "ceph_vg0"
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:        }
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]:    ]
Jan 26 13:48:52 np0005596060 mystifying_sanderson[312311]: }
Jan 26 13:48:52 np0005596060 systemd[1]: libpod-cc4c41bc8cbc6952405c54e567b370cde8a3c74da5085f0d41effba9d2f9d37d.scope: Deactivated successfully.
Jan 26 13:48:52 np0005596060 podman[312277]: 2026-01-26 18:48:52.265352737 +0000 UTC m=+0.969035909 container died cc4c41bc8cbc6952405c54e567b370cde8a3c74da5085f0d41effba9d2f9d37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 26 13:48:52 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b6765ec3aebc97472711002b819a85623025467f50dbc2f7c2173856a79a68fe-merged.mount: Deactivated successfully.
Jan 26 13:48:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:52.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:52 np0005596060 podman[312277]: 2026-01-26 18:48:52.325542743 +0000 UTC m=+1.029225885 container remove cc4c41bc8cbc6952405c54e567b370cde8a3c74da5085f0d41effba9d2f9d37d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_sanderson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 26 13:48:52 np0005596060 systemd[1]: libpod-conmon-cc4c41bc8cbc6952405c54e567b370cde8a3c74da5085f0d41effba9d2f9d37d.scope: Deactivated successfully.
Jan 26 13:48:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Jan 26 13:48:52 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Jan 26 13:48:52 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Jan 26 13:48:52 np0005596060 nova_compute[247421]: 2026-01-26 18:48:52.532 247428 DEBUG nova.storage.rbd_utils [None req-2b0b5cbe-2c96-4200-8166-ca9f26dfc0a8 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] creating snapshot(snap) on rbd image(3d5ea434-8f77-47de-b162-18bfa54d5fef) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 26 13:48:52 np0005596060 podman[312564]: 2026-01-26 18:48:52.977850946 +0000 UTC m=+0.050029183 container create 7f84207a28023eb72c4542d337273f99a19d84b5585d7c98ea5ad81c341d205c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ptolemy, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:48:53 np0005596060 systemd[1]: Started libpod-conmon-7f84207a28023eb72c4542d337273f99a19d84b5585d7c98ea5ad81c341d205c.scope.
Jan 26 13:48:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 167 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 7.7 MiB/s rd, 5.2 MiB/s wr, 148 op/s
Jan 26 13:48:53 np0005596060 podman[312564]: 2026-01-26 18:48:52.955131078 +0000 UTC m=+0.027309295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:48:53 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:48:53 np0005596060 podman[312564]: 2026-01-26 18:48:53.067277074 +0000 UTC m=+0.139455281 container init 7f84207a28023eb72c4542d337273f99a19d84b5585d7c98ea5ad81c341d205c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ptolemy, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 13:48:53 np0005596060 podman[312564]: 2026-01-26 18:48:53.074565106 +0000 UTC m=+0.146743303 container start 7f84207a28023eb72c4542d337273f99a19d84b5585d7c98ea5ad81c341d205c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:48:53 np0005596060 podman[312564]: 2026-01-26 18:48:53.078617688 +0000 UTC m=+0.150795875 container attach 7f84207a28023eb72c4542d337273f99a19d84b5585d7c98ea5ad81c341d205c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ptolemy, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:48:53 np0005596060 festive_ptolemy[312580]: 167 167
Jan 26 13:48:53 np0005596060 systemd[1]: libpod-7f84207a28023eb72c4542d337273f99a19d84b5585d7c98ea5ad81c341d205c.scope: Deactivated successfully.
Jan 26 13:48:53 np0005596060 podman[312564]: 2026-01-26 18:48:53.080747361 +0000 UTC m=+0.152925558 container died 7f84207a28023eb72c4542d337273f99a19d84b5585d7c98ea5ad81c341d205c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:48:53 np0005596060 systemd[1]: var-lib-containers-storage-overlay-164f53e3f9c47a79d1002cbf7f3691af4562eaab8944c0fd293f8b12816f3914-merged.mount: Deactivated successfully.
Jan 26 13:48:53 np0005596060 podman[312564]: 2026-01-26 18:48:53.119130831 +0000 UTC m=+0.191309028 container remove 7f84207a28023eb72c4542d337273f99a19d84b5585d7c98ea5ad81c341d205c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_ptolemy, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 26 13:48:53 np0005596060 systemd[1]: libpod-conmon-7f84207a28023eb72c4542d337273f99a19d84b5585d7c98ea5ad81c341d205c.scope: Deactivated successfully.
Jan 26 13:48:53 np0005596060 podman[312604]: 2026-01-26 18:48:53.302769066 +0000 UTC m=+0.037041797 container create 8415868a1c78d48f25bb7daea714e406b2ecc90142b1bffa9e74d3c4e57b3e48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:48:53 np0005596060 systemd[1]: Started libpod-conmon-8415868a1c78d48f25bb7daea714e406b2ecc90142b1bffa9e74d3c4e57b3e48.scope.
Jan 26 13:48:53 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:48:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d6da20410eb69d26bf99e48311f4177e5698ea91614001d6b3a470581dc7d42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d6da20410eb69d26bf99e48311f4177e5698ea91614001d6b3a470581dc7d42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d6da20410eb69d26bf99e48311f4177e5698ea91614001d6b3a470581dc7d42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:53 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d6da20410eb69d26bf99e48311f4177e5698ea91614001d6b3a470581dc7d42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:48:53 np0005596060 podman[312604]: 2026-01-26 18:48:53.287261069 +0000 UTC m=+0.021533830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:48:53 np0005596060 podman[312604]: 2026-01-26 18:48:53.389397404 +0000 UTC m=+0.123670165 container init 8415868a1c78d48f25bb7daea714e406b2ecc90142b1bffa9e74d3c4e57b3e48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 13:48:53 np0005596060 podman[312604]: 2026-01-26 18:48:53.395446206 +0000 UTC m=+0.129718937 container start 8415868a1c78d48f25bb7daea714e406b2ecc90142b1bffa9e74d3c4e57b3e48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:48:53 np0005596060 podman[312604]: 2026-01-26 18:48:53.398877951 +0000 UTC m=+0.133150682 container attach 8415868a1c78d48f25bb7daea714e406b2ecc90142b1bffa9e74d3c4e57b3e48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:48:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Jan 26 13:48:53 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Jan 26 13:48:53 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Jan 26 13:48:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:53.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:53 np0005596060 nova_compute[247421]: 2026-01-26 18:48:53.933 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:54 np0005596060 magical_meitner[312620]: {
Jan 26 13:48:54 np0005596060 magical_meitner[312620]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:48:54 np0005596060 magical_meitner[312620]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:48:54 np0005596060 magical_meitner[312620]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:48:54 np0005596060 magical_meitner[312620]:        "osd_id": 1,
Jan 26 13:48:54 np0005596060 magical_meitner[312620]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:48:54 np0005596060 magical_meitner[312620]:        "type": "bluestore"
Jan 26 13:48:54 np0005596060 magical_meitner[312620]:    }
Jan 26 13:48:54 np0005596060 magical_meitner[312620]: }
Jan 26 13:48:54 np0005596060 systemd[1]: libpod-8415868a1c78d48f25bb7daea714e406b2ecc90142b1bffa9e74d3c4e57b3e48.scope: Deactivated successfully.
Jan 26 13:48:54 np0005596060 podman[312604]: 2026-01-26 18:48:54.259797303 +0000 UTC m=+0.994070054 container died 8415868a1c78d48f25bb7daea714e406b2ecc90142b1bffa9e74d3c4e57b3e48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:48:54 np0005596060 systemd[1]: var-lib-containers-storage-overlay-1d6da20410eb69d26bf99e48311f4177e5698ea91614001d6b3a470581dc7d42-merged.mount: Deactivated successfully.
Jan 26 13:48:54 np0005596060 podman[312604]: 2026-01-26 18:48:54.319860996 +0000 UTC m=+1.054133727 container remove 8415868a1c78d48f25bb7daea714e406b2ecc90142b1bffa9e74d3c4e57b3e48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 13:48:54 np0005596060 systemd[1]: libpod-conmon-8415868a1c78d48f25bb7daea714e406b2ecc90142b1bffa9e74d3c4e57b3e48.scope: Deactivated successfully.
Jan 26 13:48:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:54.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:48:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:48:54 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:48:54 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:48:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b3133205-8d9f-4ed9-871d-b3a82680b91b does not exist
Jan 26 13:48:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 56f8083e-22d0-4aa3-8738-6826dff7027a does not exist
Jan 26 13:48:54 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 56187a5b-ca01-4bab-939c-a56e437ce890 does not exist
Jan 26 13:48:54 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:48:54 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:48:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:48:54 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 11K writes, 49K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1597 writes, 7001 keys, 1597 commit groups, 1.0 writes per commit group, ingest: 10.71 MB, 0.02 MB/s#012Interval WAL: 1597 writes, 1597 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     20.9      3.24              0.24        32    0.101       0      0       0.0       0.0#012  L6      1/0    9.81 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.1     77.8     64.5      4.36              0.90        31    0.141    185K    17K       0.0       0.0#012 Sum      1/0    9.81 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.1     44.6     45.9      7.61              1.13        63    0.121    185K    17K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.3    140.8    139.6      0.53              0.23        12    0.044     45K   3087       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0     77.8     64.5      4.36              0.90        31    0.141    185K    17K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     20.9      3.24              0.24        31    0.105       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.066, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.34 GB write, 0.08 MB/s write, 0.33 GB read, 0.08 MB/s read, 7.6 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5652937211f0#2 capacity: 304.00 MB usage: 40.10 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000263 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2302,38.72 MB,12.7372%) FilterBlock(64,529.36 KB,0.17005%) IndexBlock(64,882.02 KB,0.283337%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 26 13:48:54 np0005596060 nova_compute[247421]: 2026-01-26 18:48:54.710 247428 INFO nova.virt.libvirt.driver [None req-2b0b5cbe-2c96-4200-8166-ca9f26dfc0a8 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Snapshot image upload complete#033[00m
Jan 26 13:48:54 np0005596060 nova_compute[247421]: 2026-01-26 18:48:54.711 247428 INFO nova.compute.manager [None req-2b0b5cbe-2c96-4200-8166-ca9f26dfc0a8 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Took 4.04 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 26 13:48:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 167 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 8.1 MiB/s rd, 5.1 MiB/s wr, 144 op/s
Jan 26 13:48:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:55.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:55 np0005596060 nova_compute[247421]: 2026-01-26 18:48:55.728 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:56.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:48:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 202 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 168 op/s
Jan 26 13:48:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:48:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:57.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:48:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:48:58.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:48:58 np0005596060 nova_compute[247421]: 2026-01-26 18:48:58.934 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:48:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 202 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 6.2 MiB/s rd, 6.1 MiB/s wr, 166 op/s
Jan 26 13:48:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:48:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:48:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:48:59.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:00.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:00 np0005596060 nova_compute[247421]: 2026-01-26 18:49:00.732 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 202 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 126 KiB/s rd, 2.0 MiB/s wr, 53 op/s
Jan 26 13:49:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:01.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:49:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Jan 26 13:49:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Jan 26 13:49:01 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Jan 26 13:49:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:02.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 202 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 128 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Jan 26 13:49:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:03.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:03 np0005596060 nova_compute[247421]: 2026-01-26 18:49:03.936 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00217504623813512 of space, bias 1.0, pg target 0.652513871440536 quantized to 32 (current 32)
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.433015773311e-05 of space, bias 1.0, pg target 0.028299047319932998 quantized to 32 (current 32)
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004070737076586637 of space, bias 1.0, pg target 1.221221122975991 quantized to 32 (current 32)
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 16)
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:49:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 26 13:49:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:04.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 202 MiB data, 490 MiB used, 21 GiB / 21 GiB avail; 121 KiB/s rd, 1.7 MiB/s wr, 63 op/s
Jan 26 13:49:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Jan 26 13:49:05 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Jan 26 13:49:05 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Jan 26 13:49:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:05.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:05 np0005596060 nova_compute[247421]: 2026-01-26 18:49:05.735 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:06.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:49:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 159 MiB data, 484 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 639 B/s wr, 27 op/s
Jan 26 13:49:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:07.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:08.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Jan 26 13:49:08 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Jan 26 13:49:08 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Jan 26 13:49:08 np0005596060 nova_compute[247421]: 2026-01-26 18:49:08.939 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 123 MiB data, 461 MiB used, 21 GiB / 21 GiB avail; 43 KiB/s rd, 2.6 KiB/s wr, 62 op/s
Jan 26 13:49:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:09.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:09.789 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '16:b1:dd', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:cd:89:5f:28:db'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:49:09 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:09.791 159331 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 26 13:49:09 np0005596060 nova_compute[247421]: 2026-01-26 18:49:09.790 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.029 247428 DEBUG nova.compute.manager [req-66a9f257-0d29-45e6-baaf-50da2a8f960f req-f8eddf39-51a9-42bc-a4da-14dfb8ec6a11 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Received event network-changed-c81857f7-d034-41c1-8f0f-2d11c566b9fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.029 247428 DEBUG nova.compute.manager [req-66a9f257-0d29-45e6-baaf-50da2a8f960f req-f8eddf39-51a9-42bc-a4da-14dfb8ec6a11 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Refreshing instance network info cache due to event network-changed-c81857f7-d034-41c1-8f0f-2d11c566b9fa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.029 247428 DEBUG oslo_concurrency.lockutils [req-66a9f257-0d29-45e6-baaf-50da2a8f960f req-f8eddf39-51a9-42bc-a4da-14dfb8ec6a11 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.030 247428 DEBUG oslo_concurrency.lockutils [req-66a9f257-0d29-45e6-baaf-50da2a8f960f req-f8eddf39-51a9-42bc-a4da-14dfb8ec6a11 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquired lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.030 247428 DEBUG nova.network.neutron [req-66a9f257-0d29-45e6-baaf-50da2a8f960f req-f8eddf39-51a9-42bc-a4da-14dfb8ec6a11 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Refreshing network info cache for port c81857f7-d034-41c1-8f0f-2d11c566b9fa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.116 247428 DEBUG oslo_concurrency.lockutils [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquiring lock "4448620c-6ae7-4a36-98b9-cf616b071da7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.117 247428 DEBUG oslo_concurrency.lockutils [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.117 247428 DEBUG oslo_concurrency.lockutils [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquiring lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.118 247428 DEBUG oslo_concurrency.lockutils [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.118 247428 DEBUG oslo_concurrency.lockutils [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.119 247428 INFO nova.compute.manager [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Terminating instance#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.120 247428 DEBUG nova.compute.manager [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 26 13:49:10 np0005596060 kernel: tapc81857f7-d0 (unregistering): left promiscuous mode
Jan 26 13:49:10 np0005596060 NetworkManager[48900]: <info>  [1769453350.1779] device (tapc81857f7-d0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 13:49:10 np0005596060 ovn_controller[148842]: 2026-01-26T18:49:10Z|00220|binding|INFO|Releasing lport c81857f7-d034-41c1-8f0f-2d11c566b9fa from this chassis (sb_readonly=0)
Jan 26 13:49:10 np0005596060 ovn_controller[148842]: 2026-01-26T18:49:10Z|00221|binding|INFO|Setting lport c81857f7-d034-41c1-8f0f-2d11c566b9fa down in Southbound
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.185 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:10 np0005596060 ovn_controller[148842]: 2026-01-26T18:49:10Z|00222|binding|INFO|Removing iface tapc81857f7-d0 ovn-installed in OVS
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.187 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.193 159331 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:d6:09 10.100.0.7'], port_security=['fa:16:3e:3f:d6:09 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4448620c-6ae7-4a36-98b9-cf616b071da7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7028e9ff-4580-4927-a34a-bf2749f519c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0d840e2f88d463da0429813ca3c3914', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e4f45d30-450b-4f84-9f37-19af8e10da2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=16f45680-7acc-4bd4-acd3-31941d09daad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>], logical_port=c81857f7-d034-41c1-8f0f-2d11c566b9fa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa9b0acd910>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.195 159331 INFO neutron.agent.ovn.metadata.agent [-] Port c81857f7-d034-41c1-8f0f-2d11c566b9fa in datapath 7028e9ff-4580-4927-a34a-bf2749f519c0 unbound from our chassis#033[00m
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.196 159331 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7028e9ff-4580-4927-a34a-bf2749f519c0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.198 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3977c6-b259-4d04-b366-1ffa062470dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.199 159331 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0 namespace which is not needed anymore#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.213 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:10 np0005596060 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Jan 26 13:49:10 np0005596060 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000001f.scope: Consumed 15.560s CPU time.
Jan 26 13:49:10 np0005596060 systemd-machined[213879]: Machine qemu-18-instance-0000001f terminated.
Jan 26 13:49:10 np0005596060 neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0[311408]: [NOTICE]   (311412) : haproxy version is 2.8.14-c23fe91
Jan 26 13:49:10 np0005596060 neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0[311408]: [NOTICE]   (311412) : path to executable is /usr/sbin/haproxy
Jan 26 13:49:10 np0005596060 neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0[311408]: [WARNING]  (311412) : Exiting Master process...
Jan 26 13:49:10 np0005596060 neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0[311408]: [ALERT]    (311412) : Current worker (311414) exited with code 143 (Terminated)
Jan 26 13:49:10 np0005596060 neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0[311408]: [WARNING]  (311412) : All workers exited. Exiting... (0)
Jan 26 13:49:10 np0005596060 systemd[1]: libpod-cffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8.scope: Deactivated successfully.
Jan 26 13:49:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:10.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:10 np0005596060 podman[312788]: 2026-01-26 18:49:10.345694518 +0000 UTC m=+0.047677214 container died cffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.364 247428 INFO nova.virt.libvirt.driver [-] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Instance destroyed successfully.#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.366 247428 DEBUG nova.objects.instance [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lazy-loading 'resources' on Instance uuid 4448620c-6ae7-4a36-98b9-cf616b071da7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 26 13:49:10 np0005596060 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8-userdata-shm.mount: Deactivated successfully.
Jan 26 13:49:10 np0005596060 systemd[1]: var-lib-containers-storage-overlay-ecd0f407e085efffa09cf99af919136a49fa5fc7c7c896f8451e31ba79ab57f7-merged.mount: Deactivated successfully.
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.389 247428 DEBUG nova.virt.libvirt.vif [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T18:47:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-853570668',display_name='tempest-TestStampPattern-server-853570668',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-853570668',id=31,image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOPlHELE9ANDmnjPtXRweIc6NLWB4tjRssupdTbbRXJphUqr4KnPaqzvrgCYinLJGLkYacL40FbC5LaSigcqHxaArN4zqgbgumBJ+u494ihSKQ6ae+o9uIVi5vvtty16tQ==',key_name='tempest-TestStampPattern-298146011',keypairs=<?>,launch_index=0,launched_at=2026-01-26T18:48:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d0d840e2f88d463da0429813ca3c3914',ramdisk_id='',reservation_id='r-s501ti00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='57de5960-c1c5-4cfa-af34-8f58cf25f585',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-200466580',owner_user_name='tempest-TestStampPattern-200466580-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T18:48:54Z,user_data=None,user_id='607a744d16234868b129a11863dd5515',uuid=4448620c-6ae7-4a36-98b9-cf616b071da7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.390 247428 DEBUG nova.network.os_vif_util [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Converting VIF {"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.391 247428 DEBUG nova.network.os_vif_util [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:d6:09,bridge_name='br-int',has_traffic_filtering=True,id=c81857f7-d034-41c1-8f0f-2d11c566b9fa,network=Network(7028e9ff-4580-4927-a34a-bf2749f519c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc81857f7-d0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 26 13:49:10 np0005596060 podman[312788]: 2026-01-26 18:49:10.391912245 +0000 UTC m=+0.093894941 container cleanup cffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.391 247428 DEBUG os_vif [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:d6:09,bridge_name='br-int',has_traffic_filtering=True,id=c81857f7-d034-41c1-8f0f-2d11c566b9fa,network=Network(7028e9ff-4580-4927-a34a-bf2749f519c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc81857f7-d0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.394 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.394 247428 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc81857f7-d0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.398 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 26 13:49:10 np0005596060 systemd[1]: libpod-conmon-cffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8.scope: Deactivated successfully.
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.402 247428 INFO os_vif [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:d6:09,bridge_name='br-int',has_traffic_filtering=True,id=c81857f7-d034-41c1-8f0f-2d11c566b9fa,network=Network(7028e9ff-4580-4927-a34a-bf2749f519c0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc81857f7-d0')#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.437 247428 DEBUG nova.compute.manager [req-f4b9b5f6-c6bb-419c-831c-957bb7566100 req-423124f6-3ad5-462d-8db3-d145d17a1231 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Received event network-vif-unplugged-c81857f7-d034-41c1-8f0f-2d11c566b9fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.437 247428 DEBUG oslo_concurrency.lockutils [req-f4b9b5f6-c6bb-419c-831c-957bb7566100 req-423124f6-3ad5-462d-8db3-d145d17a1231 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.437 247428 DEBUG oslo_concurrency.lockutils [req-f4b9b5f6-c6bb-419c-831c-957bb7566100 req-423124f6-3ad5-462d-8db3-d145d17a1231 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.438 247428 DEBUG oslo_concurrency.lockutils [req-f4b9b5f6-c6bb-419c-831c-957bb7566100 req-423124f6-3ad5-462d-8db3-d145d17a1231 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.438 247428 DEBUG nova.compute.manager [req-f4b9b5f6-c6bb-419c-831c-957bb7566100 req-423124f6-3ad5-462d-8db3-d145d17a1231 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] No waiting events found dispatching network-vif-unplugged-c81857f7-d034-41c1-8f0f-2d11c566b9fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.438 247428 DEBUG nova.compute.manager [req-f4b9b5f6-c6bb-419c-831c-957bb7566100 req-423124f6-3ad5-462d-8db3-d145d17a1231 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Received event network-vif-unplugged-c81857f7-d034-41c1-8f0f-2d11c566b9fa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 26 13:49:10 np0005596060 podman[312827]: 2026-01-26 18:49:10.469684931 +0000 UTC m=+0.049030418 container remove cffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.477 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[a545e058-91da-4a49-9e6c-f39a43738fcc]: (4, ('Mon Jan 26 06:49:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0 (cffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8)\ncffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8\nMon Jan 26 06:49:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0 (cffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8)\ncffd12126b2774efa22a97caebf36fce50740a0db415d19fac603e0f2938dda8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.479 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[d8312b57-d537-4961-b8f1-c6112b7f658f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.480 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7028e9ff-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:49:10 np0005596060 kernel: tap7028e9ff-40: left promiscuous mode
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.483 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.497 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.500 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d8ddbe-e03c-4db7-80c6-399ffaceec98]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.520 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[2c43b987-295f-4cd2-9305-4d7ef1e84f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.522 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4c8623-fe7c-48cb-a5c0-5d23eb63180d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.537 253549 DEBUG oslo.privsep.daemon [-] privsep: reply[880608ef-9993-45e6-b350-2646b2c4f0d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 705131, 'reachable_time': 35187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312858, 'error': None, 'target': 'ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:49:10 np0005596060 systemd[1]: run-netns-ovnmeta\x2d7028e9ff\x2d4580\x2d4927\x2da34a\x2dbf2749f519c0.mount: Deactivated successfully.
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.541 160107 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7028e9ff-4580-4927-a34a-bf2749f519c0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 26 13:49:10 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:10.541 160107 DEBUG oslo.privsep.daemon [-] privsep: reply[405967e3-01c1-4515-a725-76ad43932a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.789 247428 INFO nova.virt.libvirt.driver [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Deleting instance files /var/lib/nova/instances/4448620c-6ae7-4a36-98b9-cf616b071da7_del#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.789 247428 INFO nova.virt.libvirt.driver [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Deletion of /var/lib/nova/instances/4448620c-6ae7-4a36-98b9-cf616b071da7_del complete#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.852 247428 INFO nova.compute.manager [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.854 247428 DEBUG oslo.service.loopingcall [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.854 247428 DEBUG nova.compute.manager [-] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 26 13:49:10 np0005596060 nova_compute[247421]: 2026-01-26 18:49:10.855 247428 DEBUG nova.network.neutron [-] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 26 13:49:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 123 MiB data, 461 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 2.4 KiB/s wr, 38 op/s
Jan 26 13:49:11 np0005596060 nova_compute[247421]: 2026-01-26 18:49:11.332 247428 DEBUG nova.network.neutron [req-66a9f257-0d29-45e6-baaf-50da2a8f960f req-f8eddf39-51a9-42bc-a4da-14dfb8ec6a11 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Updated VIF entry in instance network info cache for port c81857f7-d034-41c1-8f0f-2d11c566b9fa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 26 13:49:11 np0005596060 nova_compute[247421]: 2026-01-26 18:49:11.333 247428 DEBUG nova.network.neutron [req-66a9f257-0d29-45e6-baaf-50da2a8f960f req-f8eddf39-51a9-42bc-a4da-14dfb8ec6a11 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Updating instance_info_cache with network_info: [{"id": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "address": "fa:16:3e:3f:d6:09", "network": {"id": "7028e9ff-4580-4927-a34a-bf2749f519c0", "bridge": "br-int", "label": "tempest-TestStampPattern-1383453040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0d840e2f88d463da0429813ca3c3914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc81857f7-d0", "ovs_interfaceid": "c81857f7-d034-41c1-8f0f-2d11c566b9fa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:49:11 np0005596060 nova_compute[247421]: 2026-01-26 18:49:11.360 247428 DEBUG oslo_concurrency.lockutils [req-66a9f257-0d29-45e6-baaf-50da2a8f960f req-f8eddf39-51a9-42bc-a4da-14dfb8ec6a11 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Releasing lock "refresh_cache-4448620c-6ae7-4a36-98b9-cf616b071da7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 26 13:49:11 np0005596060 nova_compute[247421]: 2026-01-26 18:49:11.680 247428 DEBUG nova.network.neutron [-] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 26 13:49:11 np0005596060 nova_compute[247421]: 2026-01-26 18:49:11.695 247428 INFO nova.compute.manager [-] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Took 0.84 seconds to deallocate network for instance.#033[00m
Jan 26 13:49:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:11.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:11 np0005596060 nova_compute[247421]: 2026-01-26 18:49:11.737 247428 DEBUG oslo_concurrency.lockutils [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:49:11 np0005596060 nova_compute[247421]: 2026-01-26 18:49:11.738 247428 DEBUG oslo_concurrency.lockutils [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:49:11 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:11.793 159331 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c76f2593-4bbb-4cef-b447-9e180245ada6, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 26 13:49:11 np0005596060 nova_compute[247421]: 2026-01-26 18:49:11.806 247428 DEBUG oslo_concurrency.processutils [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:49:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:49:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Jan 26 13:49:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Jan 26 13:49:11 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.120 247428 DEBUG nova.compute.manager [req-a46b948b-c030-4c14-8b65-0e66a671eba4 req-f7b9a83f-c779-46b9-b364-6b161d71e908 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Received event network-vif-deleted-c81857f7-d034-41c1-8f0f-2d11c566b9fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:49:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:49:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3743317807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.234 247428 DEBUG oslo_concurrency.processutils [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.239 247428 DEBUG nova.compute.provider_tree [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.253 247428 DEBUG nova.scheduler.client.report [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:49:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:12.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.358 247428 DEBUG oslo_concurrency.lockutils [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.388 247428 INFO nova.scheduler.client.report [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Deleted allocations for instance 4448620c-6ae7-4a36-98b9-cf616b071da7#033[00m
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.455 247428 DEBUG oslo_concurrency.lockutils [None req-1a003af7-e30a-4af8-9ac8-08eb068db4d5 607a744d16234868b129a11863dd5515 d0d840e2f88d463da0429813ca3c3914 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.338s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.513 247428 DEBUG nova.compute.manager [req-e0080752-8742-444c-aba5-0f48faa101ef req-219f21e8-fc8e-4ece-961b-6679a3ab5a34 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Received event network-vif-plugged-c81857f7-d034-41c1-8f0f-2d11c566b9fa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.514 247428 DEBUG oslo_concurrency.lockutils [req-e0080752-8742-444c-aba5-0f48faa101ef req-219f21e8-fc8e-4ece-961b-6679a3ab5a34 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Acquiring lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.514 247428 DEBUG oslo_concurrency.lockutils [req-e0080752-8742-444c-aba5-0f48faa101ef req-219f21e8-fc8e-4ece-961b-6679a3ab5a34 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.514 247428 DEBUG oslo_concurrency.lockutils [req-e0080752-8742-444c-aba5-0f48faa101ef req-219f21e8-fc8e-4ece-961b-6679a3ab5a34 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] Lock "4448620c-6ae7-4a36-98b9-cf616b071da7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.514 247428 DEBUG nova.compute.manager [req-e0080752-8742-444c-aba5-0f48faa101ef req-219f21e8-fc8e-4ece-961b-6679a3ab5a34 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] No waiting events found dispatching network-vif-plugged-c81857f7-d034-41c1-8f0f-2d11c566b9fa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 26 13:49:12 np0005596060 nova_compute[247421]: 2026-01-26 18:49:12.515 247428 WARNING nova.compute.manager [req-e0080752-8742-444c-aba5-0f48faa101ef req-219f21e8-fc8e-4ece-961b-6679a3ab5a34 7c80cb855ca14686bf519248f6e32904 f838374af7b94395a3a022cf51817435 - - default default] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Received unexpected event network-vif-plugged-c81857f7-d034-41c1-8f0f-2d11c566b9fa for instance with vm_state deleted and task_state None.#033[00m
Jan 26 13:49:12 np0005596060 podman[312883]: 2026-01-26 18:49:12.787390246 +0000 UTC m=+0.052050473 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 13:49:12 np0005596060 podman[312884]: 2026-01-26 18:49:12.817028708 +0000 UTC m=+0.080314921 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 13:49:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 43 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 65 KiB/s rd, 5.1 KiB/s wr, 96 op/s
Jan 26 13:49:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:49:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/728915358' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:49:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:49:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/728915358' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:49:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:13.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:13 np0005596060 nova_compute[247421]: 2026-01-26 18:49:13.941 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:49:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:49:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:49:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:49:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:49:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:49:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:14.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:14.776 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:49:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:14.776 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:49:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:49:14.776 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:49:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 43 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 61 KiB/s rd, 4.4 KiB/s wr, 87 op/s
Jan 26 13:49:15 np0005596060 nova_compute[247421]: 2026-01-26 18:49:15.455 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:15.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:16.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:49:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Jan 26 13:49:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Jan 26 13:49:16 np0005596060 ceph-mon[74267]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Jan 26 13:49:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 43 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 52 KiB/s rd, 3.0 KiB/s wr, 72 op/s
Jan 26 13:49:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:17.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:17 np0005596060 nova_compute[247421]: 2026-01-26 18:49:17.836 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:17 np0005596060 nova_compute[247421]: 2026-01-26 18:49:17.956 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:18.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:18 np0005596060 nova_compute[247421]: 2026-01-26 18:49:18.953 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 3.5 KiB/s wr, 77 op/s
Jan 26 13:49:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:19.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:20.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:20 np0005596060 nova_compute[247421]: 2026-01-26 18:49:20.458 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 KiB/s wr, 54 op/s
Jan 26 13:49:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:21.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:49:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:22.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 716 B/s wr, 17 op/s
Jan 26 13:49:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:23.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:23 np0005596060 nova_compute[247421]: 2026-01-26 18:49:23.956 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:24.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 716 B/s wr, 17 op/s
Jan 26 13:49:25 np0005596060 nova_compute[247421]: 2026-01-26 18:49:25.364 247428 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769453350.3624916, 4448620c-6ae7-4a36-98b9-cf616b071da7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 26 13:49:25 np0005596060 nova_compute[247421]: 2026-01-26 18:49:25.364 247428 INFO nova.compute.manager [-] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] VM Stopped (Lifecycle Event)#033[00m
Jan 26 13:49:25 np0005596060 nova_compute[247421]: 2026-01-26 18:49:25.383 247428 DEBUG nova.compute.manager [None req-1fb96ca3-e25d-45b9-b8b0-515e995ffea5 - - - - - -] [instance: 4448620c-6ae7-4a36-98b9-cf616b071da7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 26 13:49:25 np0005596060 nova_compute[247421]: 2026-01-26 18:49:25.460 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:25.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:26.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:49:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 2.5 KiB/s rd, 404 B/s wr, 4 op/s
Jan 26 13:49:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:27.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:28.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:28 np0005596060 nova_compute[247421]: 2026-01-26 18:49:28.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:49:28 np0005596060 nova_compute[247421]: 2026-01-26 18:49:28.959 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 3 op/s
Jan 26 13:49:29 np0005596060 nova_compute[247421]: 2026-01-26 18:49:29.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:49:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:29.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:30.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:30 np0005596060 nova_compute[247421]: 2026-01-26 18:49:30.463 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:31.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:49:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:32.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:32 np0005596060 nova_compute[247421]: 2026-01-26 18:49:32.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:49:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:33.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:33 np0005596060 nova_compute[247421]: 2026-01-26 18:49:33.961 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:34.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:35 np0005596060 nova_compute[247421]: 2026-01-26 18:49:35.466 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:35 np0005596060 nova_compute[247421]: 2026-01-26 18:49:35.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:49:35 np0005596060 nova_compute[247421]: 2026-01-26 18:49:35.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:49:35 np0005596060 nova_compute[247421]: 2026-01-26 18:49:35.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:49:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:35.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:36.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:36 np0005596060 nova_compute[247421]: 2026-01-26 18:49:36.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:49:36 np0005596060 nova_compute[247421]: 2026-01-26 18:49:36.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:49:36 np0005596060 nova_compute[247421]: 2026-01-26 18:49:36.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:49:36 np0005596060 nova_compute[247421]: 2026-01-26 18:49:36.669 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:49:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:49:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:37 np0005596060 nova_compute[247421]: 2026-01-26 18:49:37.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:49:37 np0005596060 nova_compute[247421]: 2026-01-26 18:49:37.675 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:49:37 np0005596060 nova_compute[247421]: 2026-01-26 18:49:37.675 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:49:37 np0005596060 nova_compute[247421]: 2026-01-26 18:49:37.675 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:49:37 np0005596060 nova_compute[247421]: 2026-01-26 18:49:37.675 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:49:37 np0005596060 nova_compute[247421]: 2026-01-26 18:49:37.676 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:49:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:37.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:49:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/694710161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:49:38 np0005596060 nova_compute[247421]: 2026-01-26 18:49:38.102 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:49:38 np0005596060 nova_compute[247421]: 2026-01-26 18:49:38.296 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:49:38 np0005596060 nova_compute[247421]: 2026-01-26 18:49:38.298 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4589MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:49:38 np0005596060 nova_compute[247421]: 2026-01-26 18:49:38.298 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:49:38 np0005596060 nova_compute[247421]: 2026-01-26 18:49:38.298 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:49:38 np0005596060 nova_compute[247421]: 2026-01-26 18:49:38.359 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:49:38 np0005596060 nova_compute[247421]: 2026-01-26 18:49:38.360 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:49:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:38.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:38 np0005596060 nova_compute[247421]: 2026-01-26 18:49:38.545 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:49:38 np0005596060 nova_compute[247421]: 2026-01-26 18:49:38.962 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:38 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:49:38 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047043094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:49:38 np0005596060 nova_compute[247421]: 2026-01-26 18:49:38.986 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:49:38 np0005596060 nova_compute[247421]: 2026-01-26 18:49:38.990 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:49:39 np0005596060 nova_compute[247421]: 2026-01-26 18:49:39.007 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:49:39 np0005596060 nova_compute[247421]: 2026-01-26 18:49:39.029 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:49:39 np0005596060 nova_compute[247421]: 2026-01-26 18:49:39.029 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:49:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:39.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:40.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:40 np0005596060 nova_compute[247421]: 2026-01-26 18:49:40.512 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:41 np0005596060 nova_compute[247421]: 2026-01-26 18:49:41.029 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:49:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:41 np0005596060 nova_compute[247421]: 2026-01-26 18:49:41.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:49:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:41.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:49:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:42.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:43.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:43 np0005596060 podman[313037]: 2026-01-26 18:49:43.819295848 +0000 UTC m=+0.078478787 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 13:49:43 np0005596060 podman[313038]: 2026-01-26 18:49:43.82700538 +0000 UTC m=+0.080505738 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Jan 26 13:49:43 np0005596060 nova_compute[247421]: 2026-01-26 18:49:43.964 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:49:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:49:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:49:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:49:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:49:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:49:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:49:44
Jan 26 13:49:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:49:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:49:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 26 13:49:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:49:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:44.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:49:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:49:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:49:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:49:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:49:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:49:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:49:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:49:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:49:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:49:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:45 np0005596060 nova_compute[247421]: 2026-01-26 18:49:45.514 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:49:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:45.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:46.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:49:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:47.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:48.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:48 np0005596060 nova_compute[247421]: 2026-01-26 18:49:48.965 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:49:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:49.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:50.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:50 np0005596060 nova_compute[247421]: 2026-01-26 18:49:50.517 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:49:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:51.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:49:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:52.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:52 np0005596060 ovn_controller[148842]: 2026-01-26T18:49:52Z|00223|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Jan 26 13:49:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:53.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:53 np0005596060 nova_compute[247421]: 2026-01-26 18:49:53.966 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:49:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:54.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:55 np0005596060 podman[313304]: 2026-01-26 18:49:55.44300337 +0000 UTC m=+0.060119520 container exec ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:49:55 np0005596060 podman[313304]: 2026-01-26 18:49:55.55803697 +0000 UTC m=+0.175153110 container exec_died ebd9c630f9317cf59b6f3d070be4adfb83692a6ef435e0b95ea11db0c925756c (image=quay.io/ceph/ceph:v18, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 13:49:55 np0005596060 nova_compute[247421]: 2026-01-26 18:49:55.557 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:49:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:55.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:56 np0005596060 podman[313457]: 2026-01-26 18:49:56.118044889 +0000 UTC m=+0.057043814 container exec e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 13:49:56 np0005596060 podman[313457]: 2026-01-26 18:49:56.128562981 +0000 UTC m=+0.067561886 container exec_died e4e3f6b3b768afedc84ea429a820b85a458f611bdddbd4466c5fd2516505a512 (image=quay.io/ceph/haproxy:2.3, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-haproxy-rgw-default-compute-0-wyazzh)
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:49:56 np0005596060 podman[313522]: 2026-01-26 18:49:56.321308569 +0000 UTC m=+0.047947187 container exec 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=keepalived for Ceph, version=2.2.4, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793)
Jan 26 13:49:56 np0005596060 podman[313522]: 2026-01-26 18:49:56.333422051 +0000 UTC m=+0.060060679 container exec_died 4a4512c041b37c7205a7b96c32911a5c28ec34ae0e279f482f4c6069eaef60b5 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-d4cd1917-5876-51b6-bc64-65a16199754d-keepalived-rgw-default-compute-0-erukyj, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.buildah.version=1.28.2, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, release=1793, description=keepalived for Ceph, distribution-scope=public, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20)
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:49:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:56.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:49:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:49:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:49:57 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 27f86075-30fb-4758-b1c7-5283901de327 does not exist
Jan 26 13:49:57 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev b6c4b171-3208-4bb5-91c2-264aebdd665d does not exist
Jan 26 13:49:57 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev e883308e-a910-484f-a0be-53b8905290d5 does not exist
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:49:57 np0005596060 podman[313824]: 2026-01-26 18:49:57.627305186 +0000 UTC m=+0.038238275 container create a8d2424ac84d570ad03f08c2c1fa25c111aac54dedc5b09494f4541395820294 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_maxwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 26 13:49:57 np0005596060 systemd[1]: Started libpod-conmon-a8d2424ac84d570ad03f08c2c1fa25c111aac54dedc5b09494f4541395820294.scope.
Jan 26 13:49:57 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:49:57 np0005596060 podman[313824]: 2026-01-26 18:49:57.610535918 +0000 UTC m=+0.021469027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:49:57 np0005596060 podman[313824]: 2026-01-26 18:49:57.706965683 +0000 UTC m=+0.117898792 container init a8d2424ac84d570ad03f08c2c1fa25c111aac54dedc5b09494f4541395820294 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_maxwell, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:49:57 np0005596060 podman[313824]: 2026-01-26 18:49:57.713953767 +0000 UTC m=+0.124886866 container start a8d2424ac84d570ad03f08c2c1fa25c111aac54dedc5b09494f4541395820294 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_maxwell, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:49:57 np0005596060 podman[313824]: 2026-01-26 18:49:57.717369473 +0000 UTC m=+0.128302562 container attach a8d2424ac84d570ad03f08c2c1fa25c111aac54dedc5b09494f4541395820294 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:49:57 np0005596060 angry_maxwell[313840]: 167 167
Jan 26 13:49:57 np0005596060 systemd[1]: libpod-a8d2424ac84d570ad03f08c2c1fa25c111aac54dedc5b09494f4541395820294.scope: Deactivated successfully.
Jan 26 13:49:57 np0005596060 podman[313824]: 2026-01-26 18:49:57.720392118 +0000 UTC m=+0.131325207 container died a8d2424ac84d570ad03f08c2c1fa25c111aac54dedc5b09494f4541395820294 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:49:57 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:49:57 np0005596060 systemd[1]: var-lib-containers-storage-overlay-817c0e1b312578204dc31b66db008d7c77b09a2763d18fa3944c757271493e8e-merged.mount: Deactivated successfully.
Jan 26 13:49:57 np0005596060 podman[313824]: 2026-01-26 18:49:57.763517094 +0000 UTC m=+0.174450183 container remove a8d2424ac84d570ad03f08c2c1fa25c111aac54dedc5b09494f4541395820294 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:49:57 np0005596060 systemd[1]: libpod-conmon-a8d2424ac84d570ad03f08c2c1fa25c111aac54dedc5b09494f4541395820294.scope: Deactivated successfully.
Jan 26 13:49:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:49:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:57.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:49:57 np0005596060 podman[313862]: 2026-01-26 18:49:57.93253479 +0000 UTC m=+0.047499946 container create cfde6eece9bb72cd67aee624fc15f953b14e35c276f25ea9cf924f70805df99b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclean, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:49:57 np0005596060 systemd[1]: Started libpod-conmon-cfde6eece9bb72cd67aee624fc15f953b14e35c276f25ea9cf924f70805df99b.scope.
Jan 26 13:49:58 np0005596060 podman[313862]: 2026-01-26 18:49:57.90889368 +0000 UTC m=+0.023858876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:49:58 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:49:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad6a06b363b3d5c43b2aad27551dd2e5dc2b6a61a0185c886963bf36bb009a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:49:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad6a06b363b3d5c43b2aad27551dd2e5dc2b6a61a0185c886963bf36bb009a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:49:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad6a06b363b3d5c43b2aad27551dd2e5dc2b6a61a0185c886963bf36bb009a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:49:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad6a06b363b3d5c43b2aad27551dd2e5dc2b6a61a0185c886963bf36bb009a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:49:58 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad6a06b363b3d5c43b2aad27551dd2e5dc2b6a61a0185c886963bf36bb009a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:49:58 np0005596060 podman[313862]: 2026-01-26 18:49:58.029706704 +0000 UTC m=+0.144671900 container init cfde6eece9bb72cd67aee624fc15f953b14e35c276f25ea9cf924f70805df99b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:49:58 np0005596060 podman[313862]: 2026-01-26 18:49:58.036061872 +0000 UTC m=+0.151027018 container start cfde6eece9bb72cd67aee624fc15f953b14e35c276f25ea9cf924f70805df99b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclean, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 13:49:58 np0005596060 podman[313862]: 2026-01-26 18:49:58.040159334 +0000 UTC m=+0.155124530 container attach cfde6eece9bb72cd67aee624fc15f953b14e35c276f25ea9cf924f70805df99b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:49:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:49:58.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:58 np0005596060 keen_mclean[313878]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:49:58 np0005596060 keen_mclean[313878]: --> relative data size: 1.0
Jan 26 13:49:58 np0005596060 keen_mclean[313878]: --> All data devices are unavailable
Jan 26 13:49:58 np0005596060 systemd[1]: libpod-cfde6eece9bb72cd67aee624fc15f953b14e35c276f25ea9cf924f70805df99b.scope: Deactivated successfully.
Jan 26 13:49:58 np0005596060 podman[313862]: 2026-01-26 18:49:58.861605643 +0000 UTC m=+0.976570789 container died cfde6eece9bb72cd67aee624fc15f953b14e35c276f25ea9cf924f70805df99b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 26 13:49:58 np0005596060 systemd[1]: var-lib-containers-storage-overlay-bad6a06b363b3d5c43b2aad27551dd2e5dc2b6a61a0185c886963bf36bb009a4-merged.mount: Deactivated successfully.
Jan 26 13:49:58 np0005596060 podman[313862]: 2026-01-26 18:49:58.921801605 +0000 UTC m=+1.036766751 container remove cfde6eece9bb72cd67aee624fc15f953b14e35c276f25ea9cf924f70805df99b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:49:58 np0005596060 systemd[1]: libpod-conmon-cfde6eece9bb72cd67aee624fc15f953b14e35c276f25ea9cf924f70805df99b.scope: Deactivated successfully.
Jan 26 13:49:58 np0005596060 nova_compute[247421]: 2026-01-26 18:49:58.967 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:49:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:49:59 np0005596060 podman[314049]: 2026-01-26 18:49:59.54579072 +0000 UTC m=+0.040035680 container create ed27f96939f0dd6ca30746b06857879c0f4652dc0a34f86b4eea58ded4d38f88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 13:49:59 np0005596060 systemd[1]: Started libpod-conmon-ed27f96939f0dd6ca30746b06857879c0f4652dc0a34f86b4eea58ded4d38f88.scope.
Jan 26 13:49:59 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:49:59 np0005596060 podman[314049]: 2026-01-26 18:49:59.623058577 +0000 UTC m=+0.117303557 container init ed27f96939f0dd6ca30746b06857879c0f4652dc0a34f86b4eea58ded4d38f88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 26 13:49:59 np0005596060 podman[314049]: 2026-01-26 18:49:59.529589836 +0000 UTC m=+0.023834816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:49:59 np0005596060 podman[314049]: 2026-01-26 18:49:59.63038613 +0000 UTC m=+0.124631090 container start ed27f96939f0dd6ca30746b06857879c0f4652dc0a34f86b4eea58ded4d38f88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:49:59 np0005596060 podman[314049]: 2026-01-26 18:49:59.633693123 +0000 UTC m=+0.127938113 container attach ed27f96939f0dd6ca30746b06857879c0f4652dc0a34f86b4eea58ded4d38f88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:49:59 np0005596060 stupefied_shockley[314067]: 167 167
Jan 26 13:49:59 np0005596060 systemd[1]: libpod-ed27f96939f0dd6ca30746b06857879c0f4652dc0a34f86b4eea58ded4d38f88.scope: Deactivated successfully.
Jan 26 13:49:59 np0005596060 podman[314049]: 2026-01-26 18:49:59.636414491 +0000 UTC m=+0.130659451 container died ed27f96939f0dd6ca30746b06857879c0f4652dc0a34f86b4eea58ded4d38f88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:49:59 np0005596060 systemd[1]: var-lib-containers-storage-overlay-6bbcd2d9582fe04556f5b19a920e34f160993682ed2825bb825557fcad868ae5-merged.mount: Deactivated successfully.
Jan 26 13:49:59 np0005596060 podman[314049]: 2026-01-26 18:49:59.671800573 +0000 UTC m=+0.166045533 container remove ed27f96939f0dd6ca30746b06857879c0f4652dc0a34f86b4eea58ded4d38f88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 13:49:59 np0005596060 systemd[1]: libpod-conmon-ed27f96939f0dd6ca30746b06857879c0f4652dc0a34f86b4eea58ded4d38f88.scope: Deactivated successfully.
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.763897) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453399763992, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1177, "num_deletes": 254, "total_data_size": 1844572, "memory_usage": 1880792, "flush_reason": "Manual Compaction"}
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453399775147, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1811570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49557, "largest_seqno": 50733, "table_properties": {"data_size": 1805789, "index_size": 3112, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12662, "raw_average_key_size": 20, "raw_value_size": 1794103, "raw_average_value_size": 2903, "num_data_blocks": 136, "num_entries": 618, "num_filter_entries": 618, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769453307, "oldest_key_time": 1769453307, "file_creation_time": 1769453399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 11329 microseconds, and 5240 cpu microseconds.
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.775238) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1811570 bytes OK
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.775257) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.776736) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.776750) EVENT_LOG_v1 {"time_micros": 1769453399776745, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.776767) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1839247, prev total WAL file size 1839247, number of live WAL files 2.
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.777564) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1769KB)], [110(10045KB)]
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453399777661, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 12098080, "oldest_snapshot_seqno": -1}
Jan 26 13:49:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:49:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:49:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:49:59.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7508 keys, 10149260 bytes, temperature: kUnknown
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453399846345, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 10149260, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10101946, "index_size": 27426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18821, "raw_key_size": 194078, "raw_average_key_size": 25, "raw_value_size": 9970189, "raw_average_value_size": 1327, "num_data_blocks": 1085, "num_entries": 7508, "num_filter_entries": 7508, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769453399, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.846594) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 10149260 bytes
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.848414) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.9 rd, 147.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.8 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(12.3) write-amplify(5.6) OK, records in: 8035, records dropped: 527 output_compression: NoCompression
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.848430) EVENT_LOG_v1 {"time_micros": 1769453399848422, "job": 66, "event": "compaction_finished", "compaction_time_micros": 68783, "compaction_time_cpu_micros": 23619, "output_level": 6, "num_output_files": 1, "total_output_size": 10149260, "num_input_records": 8035, "num_output_records": 7508, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453399848893, "job": 66, "event": "table_file_deletion", "file_number": 112}
Jan 26 13:49:59 np0005596060 podman[314090]: 2026-01-26 18:49:59.849059935 +0000 UTC m=+0.053370793 container create 21f9815f5cd1d0c8a851a68575e58bdc7e99f085fb38744c4c7a9d643ec8317d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453399850468, "job": 66, "event": "table_file_deletion", "file_number": 110}
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.777438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.850516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.850520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.850521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.850523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:49:59 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:49:59.850524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:49:59 np0005596060 systemd[1]: Started libpod-conmon-21f9815f5cd1d0c8a851a68575e58bdc7e99f085fb38744c4c7a9d643ec8317d.scope.
Jan 26 13:49:59 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:49:59 np0005596060 podman[314090]: 2026-01-26 18:49:59.818267177 +0000 UTC m=+0.022578055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:49:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db116aadaedf5b26b975b4a11c96aabcce974359244c7f78554aca6168319ad5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:49:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db116aadaedf5b26b975b4a11c96aabcce974359244c7f78554aca6168319ad5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:49:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db116aadaedf5b26b975b4a11c96aabcce974359244c7f78554aca6168319ad5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:49:59 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db116aadaedf5b26b975b4a11c96aabcce974359244c7f78554aca6168319ad5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:49:59 np0005596060 podman[314090]: 2026-01-26 18:49:59.925896331 +0000 UTC m=+0.130207189 container init 21f9815f5cd1d0c8a851a68575e58bdc7e99f085fb38744c4c7a9d643ec8317d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sanderson, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:49:59 np0005596060 podman[314090]: 2026-01-26 18:49:59.934811994 +0000 UTC m=+0.139122852 container start 21f9815f5cd1d0c8a851a68575e58bdc7e99f085fb38744c4c7a9d643ec8317d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 13:49:59 np0005596060 podman[314090]: 2026-01-26 18:49:59.939316646 +0000 UTC m=+0.143627524 container attach 21f9815f5cd1d0c8a851a68575e58bdc7e99f085fb38744c4c7a9d643ec8317d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sanderson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 13:50:00 np0005596060 ceph-mon[74267]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 26 13:50:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:00.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:00 np0005596060 nova_compute[247421]: 2026-01-26 18:50:00.561 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]: {
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:    "1": [
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:        {
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            "devices": [
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "/dev/loop3"
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            ],
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            "lv_name": "ceph_lv0",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            "lv_size": "7511998464",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            "name": "ceph_lv0",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            "tags": {
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "ceph.cluster_name": "ceph",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "ceph.crush_device_class": "",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "ceph.encrypted": "0",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "ceph.osd_id": "1",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "ceph.type": "block",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:                "ceph.vdo": "0"
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            },
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            "type": "block",
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:            "vg_name": "ceph_vg0"
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:        }
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]:    ]
Jan 26 13:50:00 np0005596060 beautiful_sanderson[314107]: }
Jan 26 13:50:00 np0005596060 systemd[1]: libpod-21f9815f5cd1d0c8a851a68575e58bdc7e99f085fb38744c4c7a9d643ec8317d.scope: Deactivated successfully.
Jan 26 13:50:00 np0005596060 podman[314090]: 2026-01-26 18:50:00.705089358 +0000 UTC m=+0.909400246 container died 21f9815f5cd1d0c8a851a68575e58bdc7e99f085fb38744c4c7a9d643ec8317d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sanderson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 26 13:50:00 np0005596060 systemd[1]: var-lib-containers-storage-overlay-db116aadaedf5b26b975b4a11c96aabcce974359244c7f78554aca6168319ad5-merged.mount: Deactivated successfully.
Jan 26 13:50:00 np0005596060 podman[314090]: 2026-01-26 18:50:00.786257092 +0000 UTC m=+0.990567950 container remove 21f9815f5cd1d0c8a851a68575e58bdc7e99f085fb38744c4c7a9d643ec8317d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_sanderson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:50:00 np0005596060 systemd[1]: libpod-conmon-21f9815f5cd1d0c8a851a68575e58bdc7e99f085fb38744c4c7a9d643ec8317d.scope: Deactivated successfully.
Jan 26 13:50:01 np0005596060 ceph-mon[74267]: overall HEALTH_OK
Jan 26 13:50:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:01 np0005596060 podman[314267]: 2026-01-26 18:50:01.500783726 +0000 UTC m=+0.043165388 container create a40776fc8de9cf7401e01620f454421e053b79af7a721ad5e6a1ca9a2f861bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tharp, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:50:01 np0005596060 systemd[1]: Started libpod-conmon-a40776fc8de9cf7401e01620f454421e053b79af7a721ad5e6a1ca9a2f861bc8.scope.
Jan 26 13:50:01 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:50:01 np0005596060 podman[314267]: 2026-01-26 18:50:01.4797054 +0000 UTC m=+0.022087092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:50:01 np0005596060 podman[314267]: 2026-01-26 18:50:01.581710374 +0000 UTC m=+0.124092056 container init a40776fc8de9cf7401e01620f454421e053b79af7a721ad5e6a1ca9a2f861bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tharp, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:50:01 np0005596060 podman[314267]: 2026-01-26 18:50:01.587345935 +0000 UTC m=+0.129727597 container start a40776fc8de9cf7401e01620f454421e053b79af7a721ad5e6a1ca9a2f861bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 13:50:01 np0005596060 podman[314267]: 2026-01-26 18:50:01.590506994 +0000 UTC m=+0.132888676 container attach a40776fc8de9cf7401e01620f454421e053b79af7a721ad5e6a1ca9a2f861bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 26 13:50:01 np0005596060 systemd[1]: libpod-a40776fc8de9cf7401e01620f454421e053b79af7a721ad5e6a1ca9a2f861bc8.scope: Deactivated successfully.
Jan 26 13:50:01 np0005596060 compassionate_tharp[314283]: 167 167
Jan 26 13:50:01 np0005596060 conmon[314283]: conmon a40776fc8de9cf7401e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a40776fc8de9cf7401e01620f454421e053b79af7a721ad5e6a1ca9a2f861bc8.scope/container/memory.events
Jan 26 13:50:01 np0005596060 podman[314267]: 2026-01-26 18:50:01.59356909 +0000 UTC m=+0.135950752 container died a40776fc8de9cf7401e01620f454421e053b79af7a721ad5e6a1ca9a2f861bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tharp, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 13:50:01 np0005596060 systemd[1]: var-lib-containers-storage-overlay-d85c9e9666df06b85139f57e3bb79a0ce7147df9b39ea4d1c3f7bc22f1896e1a-merged.mount: Deactivated successfully.
Jan 26 13:50:01 np0005596060 podman[314267]: 2026-01-26 18:50:01.635071585 +0000 UTC m=+0.177453257 container remove a40776fc8de9cf7401e01620f454421e053b79af7a721ad5e6a1ca9a2f861bc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:50:01 np0005596060 systemd[1]: libpod-conmon-a40776fc8de9cf7401e01620f454421e053b79af7a721ad5e6a1ca9a2f861bc8.scope: Deactivated successfully.
Jan 26 13:50:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:01.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:01 np0005596060 podman[314308]: 2026-01-26 18:50:01.835308869 +0000 UTC m=+0.057487515 container create c5453519ecddf6106e5b005af27e6a230c39f2fe878fc00b9272a9288466751b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:50:01 np0005596060 systemd[1]: Started libpod-conmon-c5453519ecddf6106e5b005af27e6a230c39f2fe878fc00b9272a9288466751b.scope.
Jan 26 13:50:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:50:01 np0005596060 podman[314308]: 2026-01-26 18:50:01.812627093 +0000 UTC m=+0.034805769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:50:01 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:50:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fa7f9b51c41accb8e70d30afb53096c2fd7656cbd4ba38e07c65e4913845c74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:50:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fa7f9b51c41accb8e70d30afb53096c2fd7656cbd4ba38e07c65e4913845c74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:50:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fa7f9b51c41accb8e70d30afb53096c2fd7656cbd4ba38e07c65e4913845c74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:50:01 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fa7f9b51c41accb8e70d30afb53096c2fd7656cbd4ba38e07c65e4913845c74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:50:01 np0005596060 podman[314308]: 2026-01-26 18:50:01.933873428 +0000 UTC m=+0.156052094 container init c5453519ecddf6106e5b005af27e6a230c39f2fe878fc00b9272a9288466751b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:50:01 np0005596060 podman[314308]: 2026-01-26 18:50:01.941689693 +0000 UTC m=+0.163868339 container start c5453519ecddf6106e5b005af27e6a230c39f2fe878fc00b9272a9288466751b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 13:50:01 np0005596060 podman[314308]: 2026-01-26 18:50:01.945316133 +0000 UTC m=+0.167494789 container attach c5453519ecddf6106e5b005af27e6a230c39f2fe878fc00b9272a9288466751b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 13:50:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:02.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:02 np0005596060 boring_kare[314324]: {
Jan 26 13:50:02 np0005596060 boring_kare[314324]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:50:02 np0005596060 boring_kare[314324]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:50:02 np0005596060 boring_kare[314324]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:50:02 np0005596060 boring_kare[314324]:        "osd_id": 1,
Jan 26 13:50:02 np0005596060 boring_kare[314324]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:50:02 np0005596060 boring_kare[314324]:        "type": "bluestore"
Jan 26 13:50:02 np0005596060 boring_kare[314324]:    }
Jan 26 13:50:02 np0005596060 boring_kare[314324]: }
Jan 26 13:50:02 np0005596060 systemd[1]: libpod-c5453519ecddf6106e5b005af27e6a230c39f2fe878fc00b9272a9288466751b.scope: Deactivated successfully.
Jan 26 13:50:02 np0005596060 podman[314308]: 2026-01-26 18:50:02.72573889 +0000 UTC m=+0.947917536 container died c5453519ecddf6106e5b005af27e6a230c39f2fe878fc00b9272a9288466751b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 13:50:02 np0005596060 systemd[1]: var-lib-containers-storage-overlay-3fa7f9b51c41accb8e70d30afb53096c2fd7656cbd4ba38e07c65e4913845c74-merged.mount: Deactivated successfully.
Jan 26 13:50:02 np0005596060 podman[314308]: 2026-01-26 18:50:02.78106117 +0000 UTC m=+1.003239816 container remove c5453519ecddf6106e5b005af27e6a230c39f2fe878fc00b9272a9288466751b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:50:02 np0005596060 systemd[1]: libpod-conmon-c5453519ecddf6106e5b005af27e6a230c39f2fe878fc00b9272a9288466751b.scope: Deactivated successfully.
Jan 26 13:50:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:50:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:50:02 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:50:02 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:50:02 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 11d8bd92-6eba-4f52-86a7-a4d409822cb9 does not exist
Jan 26 13:50:02 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev e44e06af-8198-4ea9-8c6d-447cda4faca8 does not exist
Jan 26 13:50:02 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 00285fe0-1117-4396-8e62-56acdd4deef4 does not exist
Jan 26 13:50:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:03.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:03 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:50:03 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:50:03 np0005596060 nova_compute[247421]: 2026-01-26 18:50:03.969 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:50:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:50:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:04.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:05 np0005596060 nova_compute[247421]: 2026-01-26 18:50:05.563 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:05.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:06.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:50:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:07.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:08.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:08 np0005596060 nova_compute[247421]: 2026-01-26 18:50:08.970 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:09.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:50:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:10.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:50:10 np0005596060 nova_compute[247421]: 2026-01-26 18:50:10.606 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:11.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:50:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:50:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:12.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:50:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:13.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:13 np0005596060 nova_compute[247421]: 2026-01-26 18:50:13.973 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:50:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:50:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:50:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:50:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:50:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:50:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:14.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:50:14.776 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:50:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:50:14.776 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:50:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:50:14.777 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:50:14 np0005596060 podman[314464]: 2026-01-26 18:50:14.832136321 +0000 UTC m=+0.084562020 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 26 13:50:14 np0005596060 podman[314465]: 2026-01-26 18:50:14.834938691 +0000 UTC m=+0.087498854 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 13:50:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:15 np0005596060 nova_compute[247421]: 2026-01-26 18:50:15.610 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:15.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:50:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:16.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:50:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:50:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:17.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:50:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:18.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:50:18 np0005596060 nova_compute[247421]: 2026-01-26 18:50:18.974 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:19.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:20.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:20 np0005596060 nova_compute[247421]: 2026-01-26 18:50:20.612 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:21.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:50:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:22.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:23.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:23 np0005596060 nova_compute[247421]: 2026-01-26 18:50:23.975 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:24.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:25 np0005596060 nova_compute[247421]: 2026-01-26 18:50:25.614 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:25.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:26.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.903458) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453426903495, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 482, "num_deletes": 256, "total_data_size": 484241, "memory_usage": 493472, "flush_reason": "Manual Compaction"}
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453426908983, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 479631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50734, "largest_seqno": 51215, "table_properties": {"data_size": 476890, "index_size": 777, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6417, "raw_average_key_size": 18, "raw_value_size": 471346, "raw_average_value_size": 1350, "num_data_blocks": 34, "num_entries": 349, "num_filter_entries": 349, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769453400, "oldest_key_time": 1769453400, "file_creation_time": 1769453426, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 5587 microseconds, and 2410 cpu microseconds.
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.909043) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 479631 bytes OK
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.909062) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.910561) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.910577) EVENT_LOG_v1 {"time_micros": 1769453426910572, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.910595) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 481415, prev total WAL file size 481415, number of live WAL files 2.
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.911063) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373630' seq:72057594037927935, type:22 .. '6C6F676D0032303132' seq:0, type:0; will stop at (end)
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(468KB)], [113(9911KB)]
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453426911123, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10628891, "oldest_snapshot_seqno": -1}
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7333 keys, 10497636 bytes, temperature: kUnknown
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453426992097, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 10497636, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10450627, "index_size": 27569, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 191367, "raw_average_key_size": 26, "raw_value_size": 10320984, "raw_average_value_size": 1407, "num_data_blocks": 1089, "num_entries": 7333, "num_filter_entries": 7333, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769453426, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.992445) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 10497636 bytes
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.993866) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 131.0 rd, 129.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.7 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(44.0) write-amplify(21.9) OK, records in: 7857, records dropped: 524 output_compression: NoCompression
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.993885) EVENT_LOG_v1 {"time_micros": 1769453426993877, "job": 68, "event": "compaction_finished", "compaction_time_micros": 81133, "compaction_time_cpu_micros": 27715, "output_level": 6, "num_output_files": 1, "total_output_size": 10497636, "num_input_records": 7857, "num_output_records": 7333, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453426994057, "job": 68, "event": "table_file_deletion", "file_number": 115}
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453426996152, "job": 68, "event": "table_file_deletion", "file_number": 113}
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.910989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.996334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.996341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.996471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.996475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:50:26 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:50:26.996476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:50:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:27.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:28 np0005596060 nova_compute[247421]: 2026-01-26 18:50:28.978 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:29 np0005596060 nova_compute[247421]: 2026-01-26 18:50:29.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:50:29 np0005596060 nova_compute[247421]: 2026-01-26 18:50:29.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:50:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:29.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:30.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:30 np0005596060 nova_compute[247421]: 2026-01-26 18:50:30.616 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:50:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:31.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:50:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:50:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:32.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:33 np0005596060 nova_compute[247421]: 2026-01-26 18:50:33.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:50:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:33.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:33 np0005596060 nova_compute[247421]: 2026-01-26 18:50:33.980 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:34.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:35 np0005596060 nova_compute[247421]: 2026-01-26 18:50:35.619 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:35 np0005596060 nova_compute[247421]: 2026-01-26 18:50:35.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:50:35 np0005596060 nova_compute[247421]: 2026-01-26 18:50:35.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:50:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:35.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:36.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:50:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:37 np0005596060 nova_compute[247421]: 2026-01-26 18:50:37.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:50:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:37.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:50:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:38.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:50:38 np0005596060 nova_compute[247421]: 2026-01-26 18:50:38.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:50:38 np0005596060 nova_compute[247421]: 2026-01-26 18:50:38.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:50:38 np0005596060 nova_compute[247421]: 2026-01-26 18:50:38.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:50:38 np0005596060 nova_compute[247421]: 2026-01-26 18:50:38.672 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:50:38 np0005596060 nova_compute[247421]: 2026-01-26 18:50:38.982 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:50:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:39 np0005596060 nova_compute[247421]: 2026-01-26 18:50:39.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:50:39 np0005596060 nova_compute[247421]: 2026-01-26 18:50:39.676 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:50:39 np0005596060 nova_compute[247421]: 2026-01-26 18:50:39.677 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:50:39 np0005596060 nova_compute[247421]: 2026-01-26 18:50:39.677 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:50:39 np0005596060 nova_compute[247421]: 2026-01-26 18:50:39.678 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:50:39 np0005596060 nova_compute[247421]: 2026-01-26 18:50:39.678 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:50:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:39.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:50:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1938669134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.126 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.322 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.324 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4606MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.325 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.325 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.395 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.396 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.412 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:50:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:50:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/612972663' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:50:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:50:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/612972663' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:50:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:40.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.622 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:50:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:50:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4147498086' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.850 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.855 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.871 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.873 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 13:50:40 np0005596060 nova_compute[247421]: 2026-01-26 18:50:40.873 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:50:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:50:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:42.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:42 np0005596060 nova_compute[247421]: 2026-01-26 18:50:42.874 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:50:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:43 np0005596060 nova_compute[247421]: 2026-01-26 18:50:43.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:50:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:43.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:43 np0005596060 nova_compute[247421]: 2026-01-26 18:50:43.985 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:50:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:50:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:50:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:50:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:50:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:50:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:50:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:50:44
Jan 26 13:50:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:50:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:50:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'backups', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'images']
Jan 26 13:50:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:50:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:44.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:50:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:50:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:50:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:50:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:50:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:50:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:50:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:50:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:50:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:50:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:45 np0005596060 nova_compute[247421]: 2026-01-26 18:50:45.626 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:50:45 np0005596060 podman[314619]: 2026-01-26 18:50:45.814226574 +0000 UTC m=+0.070704445 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 13:50:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:50:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:45.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:50:45 np0005596060 podman[314620]: 2026-01-26 18:50:45.889415289 +0000 UTC m=+0.133656435 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, 
org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 13:50:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:46.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:50:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:47.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:48.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:48 np0005596060 nova_compute[247421]: 2026-01-26 18:50:48.985 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:50:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:49.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:50.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:50 np0005596060 nova_compute[247421]: 2026-01-26 18:50:50.628 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:50:50 np0005596060 nova_compute[247421]: 2026-01-26 18:50:50.646 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:50:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:50:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:51.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:50:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:50:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:50:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:52.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:50:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:50:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:53.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:50:53 np0005596060 nova_compute[247421]: 2026-01-26 18:50:53.989 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:50:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:54.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:55 np0005596060 nova_compute[247421]: 2026-01-26 18:50:55.630 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:50:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:50:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:55.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:50:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:56.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:50:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:50:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:50:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:57.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:50:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:50:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:50:58.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:50:58 np0005596060 nova_compute[247421]: 2026-01-26 18:50:58.991 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:50:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:50:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:50:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:50:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:50:59.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:00.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:00 np0005596060 nova_compute[247421]: 2026-01-26 18:51:00.633 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:51:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:01.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:51:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:02.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:03.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:03 np0005596060 nova_compute[247421]: 2026-01-26 18:51:03.992 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7d71efd3-beb1-4016-a090-57d02d7c87d7 does not exist
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev a14c2a7b-c81a-435e-85ce-c775fe4d0605 does not exist
Jan 26 13:51:04 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 4c268800-ec79-4bd9-abf1-ef5b00bd6e38 does not exist
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:51:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:04.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:51:04 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:51:04 np0005596060 podman[315000]: 2026-01-26 18:51:04.761859019 +0000 UTC m=+0.021987219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:51:04 np0005596060 podman[315000]: 2026-01-26 18:51:04.959670344 +0000 UTC m=+0.219798524 container create 18882a8a0230411d8b33f037029d9013b9ef050f28974ff025f21485482ef9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mestorf, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 13:51:05 np0005596060 systemd[1]: Started libpod-conmon-18882a8a0230411d8b33f037029d9013b9ef050f28974ff025f21485482ef9a4.scope.
Jan 26 13:51:05 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:51:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:05 np0005596060 podman[315000]: 2026-01-26 18:51:05.206670245 +0000 UTC m=+0.466798435 container init 18882a8a0230411d8b33f037029d9013b9ef050f28974ff025f21485482ef9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:51:05 np0005596060 podman[315000]: 2026-01-26 18:51:05.215631689 +0000 UTC m=+0.475759869 container start 18882a8a0230411d8b33f037029d9013b9ef050f28974ff025f21485482ef9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mestorf, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:51:05 np0005596060 systemd[1]: libpod-18882a8a0230411d8b33f037029d9013b9ef050f28974ff025f21485482ef9a4.scope: Deactivated successfully.
Jan 26 13:51:05 np0005596060 elated_mestorf[315016]: 167 167
Jan 26 13:51:05 np0005596060 conmon[315016]: conmon 18882a8a0230411d8b33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18882a8a0230411d8b33f037029d9013b9ef050f28974ff025f21485482ef9a4.scope/container/memory.events
Jan 26 13:51:05 np0005596060 podman[315000]: 2026-01-26 18:51:05.23052442 +0000 UTC m=+0.490652610 container attach 18882a8a0230411d8b33f037029d9013b9ef050f28974ff025f21485482ef9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:51:05 np0005596060 podman[315000]: 2026-01-26 18:51:05.231491174 +0000 UTC m=+0.491619364 container died 18882a8a0230411d8b33f037029d9013b9ef050f28974ff025f21485482ef9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mestorf, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 26 13:51:05 np0005596060 systemd[1]: var-lib-containers-storage-overlay-e84a85e2f7d341c46c92a3c4f5e40f7269cb40c48548284ac286eb4350f4346f-merged.mount: Deactivated successfully.
Jan 26 13:51:05 np0005596060 podman[315000]: 2026-01-26 18:51:05.531667902 +0000 UTC m=+0.791796082 container remove 18882a8a0230411d8b33f037029d9013b9ef050f28974ff025f21485482ef9a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:51:05 np0005596060 systemd[1]: libpod-conmon-18882a8a0230411d8b33f037029d9013b9ef050f28974ff025f21485482ef9a4.scope: Deactivated successfully.
Jan 26 13:51:05 np0005596060 nova_compute[247421]: 2026-01-26 18:51:05.635 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:05 np0005596060 podman[315043]: 2026-01-26 18:51:05.676231458 +0000 UTC m=+0.024894532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:51:05 np0005596060 podman[315043]: 2026-01-26 18:51:05.786776506 +0000 UTC m=+0.135439540 container create 04ff2e655b0781a7aa48dab7017b6aee46bdc4f6e6d4a86d85ad146037d71145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:51:05 np0005596060 systemd[1]: Started libpod-conmon-04ff2e655b0781a7aa48dab7017b6aee46bdc4f6e6d4a86d85ad146037d71145.scope.
Jan 26 13:51:05 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:51:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d7803283ecb4058ac616a6e65063ee7558fc85e5df9fdb5aab7c87acb5c113/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d7803283ecb4058ac616a6e65063ee7558fc85e5df9fdb5aab7c87acb5c113/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d7803283ecb4058ac616a6e65063ee7558fc85e5df9fdb5aab7c87acb5c113/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d7803283ecb4058ac616a6e65063ee7558fc85e5df9fdb5aab7c87acb5c113/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:05 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d7803283ecb4058ac616a6e65063ee7558fc85e5df9fdb5aab7c87acb5c113/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:05.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:05 np0005596060 podman[315043]: 2026-01-26 18:51:05.956155591 +0000 UTC m=+0.304818685 container init 04ff2e655b0781a7aa48dab7017b6aee46bdc4f6e6d4a86d85ad146037d71145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_northcutt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:51:05 np0005596060 podman[315043]: 2026-01-26 18:51:05.964869708 +0000 UTC m=+0.313532792 container start 04ff2e655b0781a7aa48dab7017b6aee46bdc4f6e6d4a86d85ad146037d71145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_northcutt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 26 13:51:05 np0005596060 podman[315043]: 2026-01-26 18:51:05.96937033 +0000 UTC m=+0.318033474 container attach 04ff2e655b0781a7aa48dab7017b6aee46bdc4f6e6d4a86d85ad146037d71145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:51:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:06.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:06 np0005596060 heuristic_northcutt[315059]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:51:06 np0005596060 heuristic_northcutt[315059]: --> relative data size: 1.0
Jan 26 13:51:06 np0005596060 heuristic_northcutt[315059]: --> All data devices are unavailable
Jan 26 13:51:06 np0005596060 systemd[1]: libpod-04ff2e655b0781a7aa48dab7017b6aee46bdc4f6e6d4a86d85ad146037d71145.scope: Deactivated successfully.
Jan 26 13:51:06 np0005596060 podman[315043]: 2026-01-26 18:51:06.793353103 +0000 UTC m=+1.142016217 container died 04ff2e655b0781a7aa48dab7017b6aee46bdc4f6e6d4a86d85ad146037d71145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_northcutt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:51:06 np0005596060 systemd[1]: var-lib-containers-storage-overlay-31d7803283ecb4058ac616a6e65063ee7558fc85e5df9fdb5aab7c87acb5c113-merged.mount: Deactivated successfully.
Jan 26 13:51:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:51:07 np0005596060 podman[315043]: 2026-01-26 18:51:07.016391877 +0000 UTC m=+1.365054961 container remove 04ff2e655b0781a7aa48dab7017b6aee46bdc4f6e6d4a86d85ad146037d71145 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:51:07 np0005596060 systemd[1]: libpod-conmon-04ff2e655b0781a7aa48dab7017b6aee46bdc4f6e6d4a86d85ad146037d71145.scope: Deactivated successfully.
Jan 26 13:51:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:07 np0005596060 podman[315280]: 2026-01-26 18:51:07.634944076 +0000 UTC m=+0.045221119 container create 5dc934a3c636901d9511c07f4267368c0e188cd3ffe3feb08ef1192f65a3e435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 26 13:51:07 np0005596060 systemd[1]: Started libpod-conmon-5dc934a3c636901d9511c07f4267368c0e188cd3ffe3feb08ef1192f65a3e435.scope.
Jan 26 13:51:07 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:51:07 np0005596060 podman[315280]: 2026-01-26 18:51:07.616591928 +0000 UTC m=+0.026869011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:51:07 np0005596060 podman[315280]: 2026-01-26 18:51:07.72170104 +0000 UTC m=+0.131978113 container init 5dc934a3c636901d9511c07f4267368c0e188cd3ffe3feb08ef1192f65a3e435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hofstadter, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 13:51:07 np0005596060 podman[315280]: 2026-01-26 18:51:07.729550646 +0000 UTC m=+0.139827699 container start 5dc934a3c636901d9511c07f4267368c0e188cd3ffe3feb08ef1192f65a3e435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hofstadter, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:51:07 np0005596060 podman[315280]: 2026-01-26 18:51:07.732772796 +0000 UTC m=+0.143049879 container attach 5dc934a3c636901d9511c07f4267368c0e188cd3ffe3feb08ef1192f65a3e435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hofstadter, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:51:07 np0005596060 nice_hofstadter[315296]: 167 167
Jan 26 13:51:07 np0005596060 systemd[1]: libpod-5dc934a3c636901d9511c07f4267368c0e188cd3ffe3feb08ef1192f65a3e435.scope: Deactivated successfully.
Jan 26 13:51:07 np0005596060 podman[315280]: 2026-01-26 18:51:07.734242153 +0000 UTC m=+0.144519216 container died 5dc934a3c636901d9511c07f4267368c0e188cd3ffe3feb08ef1192f65a3e435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hofstadter, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:51:07 np0005596060 systemd[1]: var-lib-containers-storage-overlay-7c65d9f797c41373a91b8d97b1b46e261e2b3cc9db60f3c68047af1b78381df0-merged.mount: Deactivated successfully.
Jan 26 13:51:07 np0005596060 podman[315280]: 2026-01-26 18:51:07.787499571 +0000 UTC m=+0.197776644 container remove 5dc934a3c636901d9511c07f4267368c0e188cd3ffe3feb08ef1192f65a3e435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_hofstadter, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 13:51:07 np0005596060 systemd[1]: libpod-conmon-5dc934a3c636901d9511c07f4267368c0e188cd3ffe3feb08ef1192f65a3e435.scope: Deactivated successfully.
Jan 26 13:51:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:07.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:07 np0005596060 podman[315322]: 2026-01-26 18:51:07.964144248 +0000 UTC m=+0.039650591 container create 70e330d32f0ea1df628e65865c86d5c48bd6adfb092b3215ef527403dd4b4c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:51:08 np0005596060 systemd[1]: Started libpod-conmon-70e330d32f0ea1df628e65865c86d5c48bd6adfb092b3215ef527403dd4b4c39.scope.
Jan 26 13:51:08 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:51:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf1cff87da106bd753a75469126181396ec27757b9441a58e1cf1cc2724a89a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf1cff87da106bd753a75469126181396ec27757b9441a58e1cf1cc2724a89a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf1cff87da106bd753a75469126181396ec27757b9441a58e1cf1cc2724a89a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:08 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf1cff87da106bd753a75469126181396ec27757b9441a58e1cf1cc2724a89a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:08 np0005596060 podman[315322]: 2026-01-26 18:51:07.947786529 +0000 UTC m=+0.023292892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:51:08 np0005596060 podman[315322]: 2026-01-26 18:51:08.046643835 +0000 UTC m=+0.122150198 container init 70e330d32f0ea1df628e65865c86d5c48bd6adfb092b3215ef527403dd4b4c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermat, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 13:51:08 np0005596060 podman[315322]: 2026-01-26 18:51:08.053305461 +0000 UTC m=+0.128811804 container start 70e330d32f0ea1df628e65865c86d5c48bd6adfb092b3215ef527403dd4b4c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermat, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 26 13:51:08 np0005596060 podman[315322]: 2026-01-26 18:51:08.05605323 +0000 UTC m=+0.131559603 container attach 70e330d32f0ea1df628e65865c86d5c48bd6adfb092b3215ef527403dd4b4c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:51:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:08.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]: {
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:    "1": [
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:        {
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            "devices": [
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "/dev/loop3"
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            ],
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            "lv_name": "ceph_lv0",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            "lv_size": "7511998464",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            "name": "ceph_lv0",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            "tags": {
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "ceph.cluster_name": "ceph",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "ceph.crush_device_class": "",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "ceph.encrypted": "0",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "ceph.osd_id": "1",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "ceph.type": "block",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:                "ceph.vdo": "0"
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            },
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            "type": "block",
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:            "vg_name": "ceph_vg0"
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:        }
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]:    ]
Jan 26 13:51:08 np0005596060 exciting_fermat[315339]: }
Jan 26 13:51:08 np0005596060 systemd[1]: libpod-70e330d32f0ea1df628e65865c86d5c48bd6adfb092b3215ef527403dd4b4c39.scope: Deactivated successfully.
Jan 26 13:51:08 np0005596060 podman[315322]: 2026-01-26 18:51:08.823718949 +0000 UTC m=+0.899225282 container died 70e330d32f0ea1df628e65865c86d5c48bd6adfb092b3215ef527403dd4b4c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermat, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:51:08 np0005596060 systemd[1]: var-lib-containers-storage-overlay-1cf1cff87da106bd753a75469126181396ec27757b9441a58e1cf1cc2724a89a-merged.mount: Deactivated successfully.
Jan 26 13:51:08 np0005596060 podman[315322]: 2026-01-26 18:51:08.879146761 +0000 UTC m=+0.954653104 container remove 70e330d32f0ea1df628e65865c86d5c48bd6adfb092b3215ef527403dd4b4c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_fermat, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 26 13:51:08 np0005596060 systemd[1]: libpod-conmon-70e330d32f0ea1df628e65865c86d5c48bd6adfb092b3215ef527403dd4b4c39.scope: Deactivated successfully.
Jan 26 13:51:08 np0005596060 nova_compute[247421]: 2026-01-26 18:51:08.992 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:09 np0005596060 podman[315501]: 2026-01-26 18:51:09.463975239 +0000 UTC m=+0.038954192 container create 1654d5ea5e287a956063bcf85e47e9349cd4e2b6e3da6bc6ce9dbda819c15212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cray, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:51:09 np0005596060 systemd[1]: Started libpod-conmon-1654d5ea5e287a956063bcf85e47e9349cd4e2b6e3da6bc6ce9dbda819c15212.scope.
Jan 26 13:51:09 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:51:09 np0005596060 podman[315501]: 2026-01-26 18:51:09.539729789 +0000 UTC m=+0.114708742 container init 1654d5ea5e287a956063bcf85e47e9349cd4e2b6e3da6bc6ce9dbda819c15212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cray, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 13:51:09 np0005596060 podman[315501]: 2026-01-26 18:51:09.446839812 +0000 UTC m=+0.021818795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:51:09 np0005596060 podman[315501]: 2026-01-26 18:51:09.552028646 +0000 UTC m=+0.127007599 container start 1654d5ea5e287a956063bcf85e47e9349cd4e2b6e3da6bc6ce9dbda819c15212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:51:09 np0005596060 mystifying_cray[315517]: 167 167
Jan 26 13:51:09 np0005596060 systemd[1]: libpod-1654d5ea5e287a956063bcf85e47e9349cd4e2b6e3da6bc6ce9dbda819c15212.scope: Deactivated successfully.
Jan 26 13:51:09 np0005596060 podman[315501]: 2026-01-26 18:51:09.556324343 +0000 UTC m=+0.131303326 container attach 1654d5ea5e287a956063bcf85e47e9349cd4e2b6e3da6bc6ce9dbda819c15212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cray, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 26 13:51:09 np0005596060 podman[315501]: 2026-01-26 18:51:09.556798985 +0000 UTC m=+0.131777938 container died 1654d5ea5e287a956063bcf85e47e9349cd4e2b6e3da6bc6ce9dbda819c15212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cray, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Jan 26 13:51:09 np0005596060 systemd[1]: var-lib-containers-storage-overlay-efab95796a934d17cde1036a2b53f07043d2d301abdfca45b7e9485460d5ded0-merged.mount: Deactivated successfully.
Jan 26 13:51:09 np0005596060 podman[315501]: 2026-01-26 18:51:09.596378762 +0000 UTC m=+0.171357725 container remove 1654d5ea5e287a956063bcf85e47e9349cd4e2b6e3da6bc6ce9dbda819c15212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_cray, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:51:09 np0005596060 systemd[1]: libpod-conmon-1654d5ea5e287a956063bcf85e47e9349cd4e2b6e3da6bc6ce9dbda819c15212.scope: Deactivated successfully.
Jan 26 13:51:09 np0005596060 podman[315543]: 2026-01-26 18:51:09.762240658 +0000 UTC m=+0.052279105 container create e762a3571aed6094c3b26a8350f3b013aae8b3e7add160d31e802a774b12d084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 26 13:51:09 np0005596060 systemd[1]: Started libpod-conmon-e762a3571aed6094c3b26a8350f3b013aae8b3e7add160d31e802a774b12d084.scope.
Jan 26 13:51:09 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:51:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5c9432b6f94c8a7be5dc90095a17041392ebcfe4d88c9cfef3723cb673ba62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5c9432b6f94c8a7be5dc90095a17041392ebcfe4d88c9cfef3723cb673ba62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5c9432b6f94c8a7be5dc90095a17041392ebcfe4d88c9cfef3723cb673ba62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:09 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5c9432b6f94c8a7be5dc90095a17041392ebcfe4d88c9cfef3723cb673ba62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:51:09 np0005596060 podman[315543]: 2026-01-26 18:51:09.737099841 +0000 UTC m=+0.027138328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:51:09 np0005596060 podman[315543]: 2026-01-26 18:51:09.841956107 +0000 UTC m=+0.131994574 container init e762a3571aed6094c3b26a8350f3b013aae8b3e7add160d31e802a774b12d084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 26 13:51:09 np0005596060 podman[315543]: 2026-01-26 18:51:09.85572407 +0000 UTC m=+0.145762517 container start e762a3571aed6094c3b26a8350f3b013aae8b3e7add160d31e802a774b12d084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 13:51:09 np0005596060 podman[315543]: 2026-01-26 18:51:09.858769456 +0000 UTC m=+0.148808103 container attach e762a3571aed6094c3b26a8350f3b013aae8b3e7add160d31e802a774b12d084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 26 13:51:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:09.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:10.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:10 np0005596060 nova_compute[247421]: 2026-01-26 18:51:10.637 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:10 np0005596060 loving_chandrasekhar[315559]: {
Jan 26 13:51:10 np0005596060 loving_chandrasekhar[315559]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:51:10 np0005596060 loving_chandrasekhar[315559]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:51:10 np0005596060 loving_chandrasekhar[315559]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:51:10 np0005596060 loving_chandrasekhar[315559]:        "osd_id": 1,
Jan 26 13:51:10 np0005596060 loving_chandrasekhar[315559]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:51:10 np0005596060 loving_chandrasekhar[315559]:        "type": "bluestore"
Jan 26 13:51:10 np0005596060 loving_chandrasekhar[315559]:    }
Jan 26 13:51:10 np0005596060 loving_chandrasekhar[315559]: }
Jan 26 13:51:10 np0005596060 systemd[1]: libpod-e762a3571aed6094c3b26a8350f3b013aae8b3e7add160d31e802a774b12d084.scope: Deactivated successfully.
Jan 26 13:51:10 np0005596060 podman[315580]: 2026-01-26 18:51:10.71842597 +0000 UTC m=+0.027088437 container died e762a3571aed6094c3b26a8350f3b013aae8b3e7add160d31e802a774b12d084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:51:10 np0005596060 systemd[1]: var-lib-containers-storage-overlay-4c5c9432b6f94c8a7be5dc90095a17041392ebcfe4d88c9cfef3723cb673ba62-merged.mount: Deactivated successfully.
Jan 26 13:51:10 np0005596060 podman[315580]: 2026-01-26 18:51:10.774466598 +0000 UTC m=+0.083129065 container remove e762a3571aed6094c3b26a8350f3b013aae8b3e7add160d31e802a774b12d084 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 26 13:51:10 np0005596060 systemd[1]: libpod-conmon-e762a3571aed6094c3b26a8350f3b013aae8b3e7add160d31e802a774b12d084.scope: Deactivated successfully.
Jan 26 13:51:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:51:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:51:10 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:51:10 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:51:10 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 8f6774cc-af60-4958-85a0-68336f316958 does not exist
Jan 26 13:51:10 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d7351eeb-b29a-4cf8-9a21-1206105b3eac does not exist
Jan 26 13:51:10 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev fed7db68-c13e-4efb-a03d-c840cb97f5e5 does not exist
Jan 26 13:51:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:51:11 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 21K writes, 76K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s#012Cumulative WAL: 21K writes, 7356 syncs, 2.98 writes per sync, written: 0.06 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2417 writes, 7263 keys, 2417 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s#012Interval WAL: 2417 writes, 1055 syncs, 2.29 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 13:51:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:51:11 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:51:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:51:11 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:11 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:11 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:11.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:12.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:13 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:13 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:13 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:13.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:13 np0005596060 nova_compute[247421]: 2026-01-26 18:51:13.993 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:51:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:51:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:51:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:51:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:51:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:51:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:14.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:51:14.777 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:51:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:51:14.778 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:51:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:51:14.778 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:51:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:15 np0005596060 nova_compute[247421]: 2026-01-26 18:51:15.641 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:15 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:15 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:15 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:15.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:16.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:16 np0005596060 podman[315648]: 2026-01-26 18:51:16.794415459 +0000 UTC m=+0.059847854 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 13:51:16 np0005596060 podman[315649]: 2026-01-26 18:51:16.820408817 +0000 UTC m=+0.084651203 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 13:51:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:51:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:17 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:17 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:17 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:17.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:18.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:18 np0005596060 nova_compute[247421]: 2026-01-26 18:51:18.995 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:19 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:19 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:19 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:19.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:20.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:20 np0005596060 nova_compute[247421]: 2026-01-26 18:51:20.643 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:51:21 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:21 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:21 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:21.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:22.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:23 np0005596060 ceph-mgr[74563]: [devicehealth INFO root] Check health
Jan 26 13:51:23 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:23 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:23 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:23.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:23 np0005596060 nova_compute[247421]: 2026-01-26 18:51:23.996 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:24.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:25 np0005596060 nova_compute[247421]: 2026-01-26 18:51:25.646 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:25 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:25 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:25 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:25.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:26.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:51:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:27 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:27 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:27 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:27.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:51:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:28.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:51:28 np0005596060 nova_compute[247421]: 2026-01-26 18:51:28.998 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:29 np0005596060 nova_compute[247421]: 2026-01-26 18:51:29.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:51:29 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:29 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:29 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:29.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:30.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:30 np0005596060 nova_compute[247421]: 2026-01-26 18:51:30.648 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:31 np0005596060 nova_compute[247421]: 2026-01-26 18:51:31.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:51:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:51:31 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:31 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:31 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:31.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:32.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:33 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:33 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:33 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:33.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:34 np0005596060 nova_compute[247421]: 2026-01-26 18:51:34.001 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:34.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:34 np0005596060 nova_compute[247421]: 2026-01-26 18:51:34.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:51:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:35 np0005596060 nova_compute[247421]: 2026-01-26 18:51:35.652 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:35 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:35 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:35 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:35.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:36.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:36 np0005596060 nova_compute[247421]: 2026-01-26 18:51:36.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:51:36 np0005596060 nova_compute[247421]: 2026-01-26 18:51:36.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:51:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:51:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:37 np0005596060 nova_compute[247421]: 2026-01-26 18:51:37.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:51:37 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:37 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:37 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:37.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:38.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:39 np0005596060 nova_compute[247421]: 2026-01-26 18:51:39.002 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:39 np0005596060 nova_compute[247421]: 2026-01-26 18:51:39.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:51:39 np0005596060 nova_compute[247421]: 2026-01-26 18:51:39.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:51:39 np0005596060 nova_compute[247421]: 2026-01-26 18:51:39.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:51:39 np0005596060 nova_compute[247421]: 2026-01-26 18:51:39.666 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:51:39 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:39 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:39 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:39.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:40.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:40 np0005596060 nova_compute[247421]: 2026-01-26 18:51:40.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:51:40 np0005596060 nova_compute[247421]: 2026-01-26 18:51:40.654 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:40 np0005596060 nova_compute[247421]: 2026-01-26 18:51:40.717 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:51:40 np0005596060 nova_compute[247421]: 2026-01-26 18:51:40.718 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:51:40 np0005596060 nova_compute[247421]: 2026-01-26 18:51:40.718 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:51:40 np0005596060 nova_compute[247421]: 2026-01-26 18:51:40.718 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:51:40 np0005596060 nova_compute[247421]: 2026-01-26 18:51:40.719 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:51:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:51:41 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1419177069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:51:41 np0005596060 nova_compute[247421]: 2026-01-26 18:51:41.172 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:51:41 np0005596060 nova_compute[247421]: 2026-01-26 18:51:41.336 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:51:41 np0005596060 nova_compute[247421]: 2026-01-26 18:51:41.337 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4548MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:51:41 np0005596060 nova_compute[247421]: 2026-01-26 18:51:41.337 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:51:41 np0005596060 nova_compute[247421]: 2026-01-26 18:51:41.337 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:51:41 np0005596060 nova_compute[247421]: 2026-01-26 18:51:41.623 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:51:41 np0005596060 nova_compute[247421]: 2026-01-26 18:51:41.623 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:51:41 np0005596060 nova_compute[247421]: 2026-01-26 18:51:41.638 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:51:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:51:41 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:41 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:41 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:41.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:51:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1707577278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:51:42 np0005596060 nova_compute[247421]: 2026-01-26 18:51:42.059 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:51:42 np0005596060 nova_compute[247421]: 2026-01-26 18:51:42.065 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:51:42 np0005596060 nova_compute[247421]: 2026-01-26 18:51:42.115 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:51:42 np0005596060 nova_compute[247421]: 2026-01-26 18:51:42.117 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:51:42 np0005596060 nova_compute[247421]: 2026-01-26 18:51:42.117 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:51:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:42.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:43 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:43 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:43 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:43.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:44 np0005596060 nova_compute[247421]: 2026-01-26 18:51:44.004 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:44 np0005596060 nova_compute[247421]: 2026-01-26 18:51:44.118 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:51:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:51:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:51:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:51:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:51:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:51:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:51:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:51:44
Jan 26 13:51:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:51:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:51:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', '.mgr', 'vms', 'backups', 'cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta']
Jan 26 13:51:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:51:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 26 13:51:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:44.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 26 13:51:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:51:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:51:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:51:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:51:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:51:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:51:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:51:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:51:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:51:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:51:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:45 np0005596060 nova_compute[247421]: 2026-01-26 18:51:45.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:51:45 np0005596060 nova_compute[247421]: 2026-01-26 18:51:45.657 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:45 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:45 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:45 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:45.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:46.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:51:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:47 np0005596060 podman[315828]: 2026-01-26 18:51:47.600287694 +0000 UTC m=+0.054381697 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 26 13:51:47 np0005596060 podman[315829]: 2026-01-26 18:51:47.642954818 +0000 UTC m=+0.092763274 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 13:51:47 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:47 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:47 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:47.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:48.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:49 np0005596060 nova_compute[247421]: 2026-01-26 18:51:49.005 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:49 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:49 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:49 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:49.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:50.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:50 np0005596060 nova_compute[247421]: 2026-01-26 18:51:50.661 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:51:51 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:51 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:51 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:51.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:52.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:53 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:53 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:53 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:53.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:54 np0005596060 nova_compute[247421]: 2026-01-26 18:51:54.008 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:54.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:55 np0005596060 nova_compute[247421]: 2026-01-26 18:51:55.663 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:55 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:55 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:55 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:55.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:56.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:51:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:51:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:57 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:57 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:57 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:57.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:51:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:51:58.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:51:59 np0005596060 nova_compute[247421]: 2026-01-26 18:51:59.009 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:51:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:51:59 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:51:59 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:51:59 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:51:59.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:00.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:00 np0005596060 nova_compute[247421]: 2026-01-26 18:52:00.665 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:52:01 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:01 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:01 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:01.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:02.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:03 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:03 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:03 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:03.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:04 np0005596060 nova_compute[247421]: 2026-01-26 18:52:04.012 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:52:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:52:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:04.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:05 np0005596060 nova_compute[247421]: 2026-01-26 18:52:05.670 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:05 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:05 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:05 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:05.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:06.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:52:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:07 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:07 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:07 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:07.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:08.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:09 np0005596060 nova_compute[247421]: 2026-01-26 18:52:09.013 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:09 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:09 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:09 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:09.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:10.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:10 np0005596060 nova_compute[247421]: 2026-01-26 18:52:10.705 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:52:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:12.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:52:12 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev dbd79eb2-5dba-4962-af48-d7fbca1d0228 does not exist
Jan 26 13:52:12 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev f4acf9c9-c895-4c54-bf2b-0f8611dcaf8e does not exist
Jan 26 13:52:12 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev bce5baa0-a9d2-4a28-af1b-8a3f79d0916d does not exist
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:52:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:52:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:12.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:12 np0005596060 podman[316231]: 2026-01-26 18:52:12.971249784 +0000 UTC m=+0.053135966 container create 7ff75fbecdc33625ef1e2a092d5f0f6e9290e19647e393b9fe525d0f337571d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dirac, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:52:13 np0005596060 systemd[1]: Started libpod-conmon-7ff75fbecdc33625ef1e2a092d5f0f6e9290e19647e393b9fe525d0f337571d8.scope.
Jan 26 13:52:13 np0005596060 podman[316231]: 2026-01-26 18:52:12.946378314 +0000 UTC m=+0.028264556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:52:13 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:52:13 np0005596060 podman[316231]: 2026-01-26 18:52:13.067340461 +0000 UTC m=+0.149226733 container init 7ff75fbecdc33625ef1e2a092d5f0f6e9290e19647e393b9fe525d0f337571d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:52:13 np0005596060 podman[316231]: 2026-01-26 18:52:13.078533501 +0000 UTC m=+0.160419683 container start 7ff75fbecdc33625ef1e2a092d5f0f6e9290e19647e393b9fe525d0f337571d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dirac, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:52:13 np0005596060 podman[316231]: 2026-01-26 18:52:13.083849733 +0000 UTC m=+0.165736005 container attach 7ff75fbecdc33625ef1e2a092d5f0f6e9290e19647e393b9fe525d0f337571d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 13:52:13 np0005596060 focused_dirac[316247]: 167 167
Jan 26 13:52:13 np0005596060 systemd[1]: libpod-7ff75fbecdc33625ef1e2a092d5f0f6e9290e19647e393b9fe525d0f337571d8.scope: Deactivated successfully.
Jan 26 13:52:13 np0005596060 conmon[316247]: conmon 7ff75fbecdc33625ef1e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ff75fbecdc33625ef1e2a092d5f0f6e9290e19647e393b9fe525d0f337571d8.scope/container/memory.events
Jan 26 13:52:13 np0005596060 podman[316231]: 2026-01-26 18:52:13.088264153 +0000 UTC m=+0.170150355 container died 7ff75fbecdc33625ef1e2a092d5f0f6e9290e19647e393b9fe525d0f337571d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:52:13 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b619a43cb91f32a25b750c52e96e257e4829999387c96a6192db95913fa777af-merged.mount: Deactivated successfully.
Jan 26 13:52:13 np0005596060 podman[316231]: 2026-01-26 18:52:13.141677656 +0000 UTC m=+0.223563838 container remove 7ff75fbecdc33625ef1e2a092d5f0f6e9290e19647e393b9fe525d0f337571d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:52:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:13 np0005596060 systemd[1]: libpod-conmon-7ff75fbecdc33625ef1e2a092d5f0f6e9290e19647e393b9fe525d0f337571d8.scope: Deactivated successfully.
Jan 26 13:52:13 np0005596060 podman[316271]: 2026-01-26 18:52:13.336890155 +0000 UTC m=+0.050112861 container create 2d673e6412d74480aba4a4d968ace20da402e3169e262e1535a6eebc426e980a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:52:13 np0005596060 systemd[1]: Started libpod-conmon-2d673e6412d74480aba4a4d968ace20da402e3169e262e1535a6eebc426e980a.scope.
Jan 26 13:52:13 np0005596060 podman[316271]: 2026-01-26 18:52:13.318281861 +0000 UTC m=+0.031504597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:52:13 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:52:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/318e7c0989cbbf032eb62857b296173d894495e89d69b071b12f331becd92933/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/318e7c0989cbbf032eb62857b296173d894495e89d69b071b12f331becd92933/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/318e7c0989cbbf032eb62857b296173d894495e89d69b071b12f331becd92933/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/318e7c0989cbbf032eb62857b296173d894495e89d69b071b12f331becd92933/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:13 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/318e7c0989cbbf032eb62857b296173d894495e89d69b071b12f331becd92933/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:13 np0005596060 podman[316271]: 2026-01-26 18:52:13.443196497 +0000 UTC m=+0.156419223 container init 2d673e6412d74480aba4a4d968ace20da402e3169e262e1535a6eebc426e980a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feistel, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:52:13 np0005596060 podman[316271]: 2026-01-26 18:52:13.453935555 +0000 UTC m=+0.167158261 container start 2d673e6412d74480aba4a4d968ace20da402e3169e262e1535a6eebc426e980a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feistel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:52:13 np0005596060 podman[316271]: 2026-01-26 18:52:13.457665418 +0000 UTC m=+0.170888144 container attach 2d673e6412d74480aba4a4d968ace20da402e3169e262e1535a6eebc426e980a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feistel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 13:52:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:52:13 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:52:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:14.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:14 np0005596060 nova_compute[247421]: 2026-01-26 18:52:14.087 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:52:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:52:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:52:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:52:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:52:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:52:14 np0005596060 quizzical_feistel[316287]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:52:14 np0005596060 quizzical_feistel[316287]: --> relative data size: 1.0
Jan 26 13:52:14 np0005596060 quizzical_feistel[316287]: --> All data devices are unavailable
Jan 26 13:52:14 np0005596060 systemd[1]: libpod-2d673e6412d74480aba4a4d968ace20da402e3169e262e1535a6eebc426e980a.scope: Deactivated successfully.
Jan 26 13:52:14 np0005596060 podman[316271]: 2026-01-26 18:52:14.289853715 +0000 UTC m=+1.003076421 container died 2d673e6412d74480aba4a4d968ace20da402e3169e262e1535a6eebc426e980a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feistel, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:52:14 np0005596060 systemd[1]: var-lib-containers-storage-overlay-318e7c0989cbbf032eb62857b296173d894495e89d69b071b12f331becd92933-merged.mount: Deactivated successfully.
Jan 26 13:52:14 np0005596060 podman[316271]: 2026-01-26 18:52:14.34776334 +0000 UTC m=+1.060986056 container remove 2d673e6412d74480aba4a4d968ace20da402e3169e262e1535a6eebc426e980a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:52:14 np0005596060 systemd[1]: libpod-conmon-2d673e6412d74480aba4a4d968ace20da402e3169e262e1535a6eebc426e980a.scope: Deactivated successfully.
Jan 26 13:52:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:14.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:52:14.778 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:52:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:52:14.779 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:52:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:52:14.779 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:52:14 np0005596060 podman[316460]: 2026-01-26 18:52:14.914198869 +0000 UTC m=+0.042668936 container create e06ef25d85a0886437a83bf9eb13239ca2870032829b11c4cd021d54e2e048c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 13:52:14 np0005596060 systemd[1]: Started libpod-conmon-e06ef25d85a0886437a83bf9eb13239ca2870032829b11c4cd021d54e2e048c2.scope.
Jan 26 13:52:14 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:52:14 np0005596060 podman[316460]: 2026-01-26 18:52:14.987135418 +0000 UTC m=+0.115605485 container init e06ef25d85a0886437a83bf9eb13239ca2870032829b11c4cd021d54e2e048c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 26 13:52:14 np0005596060 podman[316460]: 2026-01-26 18:52:14.893233716 +0000 UTC m=+0.021703823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:52:14 np0005596060 podman[316460]: 2026-01-26 18:52:14.994951013 +0000 UTC m=+0.123421070 container start e06ef25d85a0886437a83bf9eb13239ca2870032829b11c4cd021d54e2e048c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:52:14 np0005596060 podman[316460]: 2026-01-26 18:52:14.998201634 +0000 UTC m=+0.126671711 container attach e06ef25d85a0886437a83bf9eb13239ca2870032829b11c4cd021d54e2e048c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:52:14 np0005596060 bold_curran[316477]: 167 167
Jan 26 13:52:15 np0005596060 systemd[1]: libpod-e06ef25d85a0886437a83bf9eb13239ca2870032829b11c4cd021d54e2e048c2.scope: Deactivated successfully.
Jan 26 13:52:15 np0005596060 podman[316460]: 2026-01-26 18:52:15.016104021 +0000 UTC m=+0.144574078 container died e06ef25d85a0886437a83bf9eb13239ca2870032829b11c4cd021d54e2e048c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:52:15 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c0e3c5f26fada5483e62896f3cdc4f80991a2dd63c4a84bd037817356c10341e-merged.mount: Deactivated successfully.
Jan 26 13:52:15 np0005596060 podman[316460]: 2026-01-26 18:52:15.049960335 +0000 UTC m=+0.178430392 container remove e06ef25d85a0886437a83bf9eb13239ca2870032829b11c4cd021d54e2e048c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:52:15 np0005596060 systemd[1]: libpod-conmon-e06ef25d85a0886437a83bf9eb13239ca2870032829b11c4cd021d54e2e048c2.scope: Deactivated successfully.
Jan 26 13:52:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:15 np0005596060 podman[316501]: 2026-01-26 18:52:15.208138441 +0000 UTC m=+0.037507607 container create 90ad955d2b713f782ab422d9ae4ad55c557aa735194137a56176ba6d7a398974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 13:52:15 np0005596060 systemd[1]: Started libpod-conmon-90ad955d2b713f782ab422d9ae4ad55c557aa735194137a56176ba6d7a398974.scope.
Jan 26 13:52:15 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:52:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6076b1ed116e7b9e2b9735995bfb42fb9ee90b56de683aad948aaae9945105b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6076b1ed116e7b9e2b9735995bfb42fb9ee90b56de683aad948aaae9945105b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:15 np0005596060 podman[316501]: 2026-01-26 18:52:15.192755987 +0000 UTC m=+0.022125153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:52:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6076b1ed116e7b9e2b9735995bfb42fb9ee90b56de683aad948aaae9945105b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:15 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6076b1ed116e7b9e2b9735995bfb42fb9ee90b56de683aad948aaae9945105b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:15 np0005596060 podman[316501]: 2026-01-26 18:52:15.301030058 +0000 UTC m=+0.130399204 container init 90ad955d2b713f782ab422d9ae4ad55c557aa735194137a56176ba6d7a398974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:52:15 np0005596060 podman[316501]: 2026-01-26 18:52:15.309325645 +0000 UTC m=+0.138694791 container start 90ad955d2b713f782ab422d9ae4ad55c557aa735194137a56176ba6d7a398974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cerf, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 26 13:52:15 np0005596060 podman[316501]: 2026-01-26 18:52:15.312377301 +0000 UTC m=+0.141746467 container attach 90ad955d2b713f782ab422d9ae4ad55c557aa735194137a56176ba6d7a398974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 26 13:52:15 np0005596060 nova_compute[247421]: 2026-01-26 18:52:15.707 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:16.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]: {
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:    "1": [
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:        {
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            "devices": [
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "/dev/loop3"
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            ],
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            "lv_name": "ceph_lv0",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            "lv_size": "7511998464",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            "name": "ceph_lv0",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            "tags": {
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "ceph.cluster_name": "ceph",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "ceph.crush_device_class": "",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "ceph.encrypted": "0",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "ceph.osd_id": "1",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "ceph.type": "block",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:                "ceph.vdo": "0"
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            },
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            "type": "block",
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:            "vg_name": "ceph_vg0"
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:        }
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]:    ]
Jan 26 13:52:16 np0005596060 nervous_cerf[316518]: }
Jan 26 13:52:16 np0005596060 systemd[1]: libpod-90ad955d2b713f782ab422d9ae4ad55c557aa735194137a56176ba6d7a398974.scope: Deactivated successfully.
Jan 26 13:52:16 np0005596060 podman[316501]: 2026-01-26 18:52:16.102782987 +0000 UTC m=+0.932152143 container died 90ad955d2b713f782ab422d9ae4ad55c557aa735194137a56176ba6d7a398974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 13:52:16 np0005596060 systemd[1]: var-lib-containers-storage-overlay-6076b1ed116e7b9e2b9735995bfb42fb9ee90b56de683aad948aaae9945105b6-merged.mount: Deactivated successfully.
Jan 26 13:52:16 np0005596060 podman[316501]: 2026-01-26 18:52:16.159284696 +0000 UTC m=+0.988653852 container remove 90ad955d2b713f782ab422d9ae4ad55c557aa735194137a56176ba6d7a398974 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_cerf, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Jan 26 13:52:16 np0005596060 systemd[1]: libpod-conmon-90ad955d2b713f782ab422d9ae4ad55c557aa735194137a56176ba6d7a398974.scope: Deactivated successfully.
Jan 26 13:52:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:16.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:16 np0005596060 podman[316678]: 2026-01-26 18:52:16.800906841 +0000 UTC m=+0.038989874 container create 81ceedc382a51f530335cfad71efc02cdfa54fdd664707759df4f2c6ae11f1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 26 13:52:16 np0005596060 systemd[1]: Started libpod-conmon-81ceedc382a51f530335cfad71efc02cdfa54fdd664707759df4f2c6ae11f1fe.scope.
Jan 26 13:52:16 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:52:16 np0005596060 podman[316678]: 2026-01-26 18:52:16.784761708 +0000 UTC m=+0.022844761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:52:16 np0005596060 podman[316678]: 2026-01-26 18:52:16.887490061 +0000 UTC m=+0.125573094 container init 81ceedc382a51f530335cfad71efc02cdfa54fdd664707759df4f2c6ae11f1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 26 13:52:16 np0005596060 podman[316678]: 2026-01-26 18:52:16.895608833 +0000 UTC m=+0.133691876 container start 81ceedc382a51f530335cfad71efc02cdfa54fdd664707759df4f2c6ae11f1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 26 13:52:16 np0005596060 podman[316678]: 2026-01-26 18:52:16.899389238 +0000 UTC m=+0.137472271 container attach 81ceedc382a51f530335cfad71efc02cdfa54fdd664707759df4f2c6ae11f1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:52:16 np0005596060 happy_einstein[316695]: 167 167
Jan 26 13:52:16 np0005596060 systemd[1]: libpod-81ceedc382a51f530335cfad71efc02cdfa54fdd664707759df4f2c6ae11f1fe.scope: Deactivated successfully.
Jan 26 13:52:16 np0005596060 podman[316678]: 2026-01-26 18:52:16.901031099 +0000 UTC m=+0.139114142 container died 81ceedc382a51f530335cfad71efc02cdfa54fdd664707759df4f2c6ae11f1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:52:16 np0005596060 systemd[1]: var-lib-containers-storage-overlay-930e8a62c7edc9a859f910bef0d72119565e5a7584899678a46f7e3064232de3-merged.mount: Deactivated successfully.
Jan 26 13:52:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:52:16 np0005596060 podman[316678]: 2026-01-26 18:52:16.936641747 +0000 UTC m=+0.174724790 container remove 81ceedc382a51f530335cfad71efc02cdfa54fdd664707759df4f2c6ae11f1fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_einstein, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:52:16 np0005596060 systemd[1]: libpod-conmon-81ceedc382a51f530335cfad71efc02cdfa54fdd664707759df4f2c6ae11f1fe.scope: Deactivated successfully.
Jan 26 13:52:17 np0005596060 podman[316718]: 2026-01-26 18:52:17.106604186 +0000 UTC m=+0.040792938 container create f69ff7956da82da20128a5bc7c50b89b489ce97a7bdce6a403a943e38fcb3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 26 13:52:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:17 np0005596060 systemd[1]: Started libpod-conmon-f69ff7956da82da20128a5bc7c50b89b489ce97a7bdce6a403a943e38fcb3619.scope.
Jan 26 13:52:17 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:52:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef7267c3751fd0a16b0c776a98616ee2da790979594a1a184502a44c94fe621/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef7267c3751fd0a16b0c776a98616ee2da790979594a1a184502a44c94fe621/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef7267c3751fd0a16b0c776a98616ee2da790979594a1a184502a44c94fe621/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:17 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef7267c3751fd0a16b0c776a98616ee2da790979594a1a184502a44c94fe621/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:52:17 np0005596060 podman[316718]: 2026-01-26 18:52:17.089872699 +0000 UTC m=+0.024061451 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:52:17 np0005596060 podman[316718]: 2026-01-26 18:52:17.187804752 +0000 UTC m=+0.121993514 container init f69ff7956da82da20128a5bc7c50b89b489ce97a7bdce6a403a943e38fcb3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 26 13:52:17 np0005596060 podman[316718]: 2026-01-26 18:52:17.194031067 +0000 UTC m=+0.128219809 container start f69ff7956da82da20128a5bc7c50b89b489ce97a7bdce6a403a943e38fcb3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:52:17 np0005596060 podman[316718]: 2026-01-26 18:52:17.197083813 +0000 UTC m=+0.131272545 container attach f69ff7956da82da20128a5bc7c50b89b489ce97a7bdce6a403a943e38fcb3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:52:17 np0005596060 podman[316739]: 2026-01-26 18:52:17.821871357 +0000 UTC m=+0.085402641 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 13:52:17 np0005596060 podman[316740]: 2026-01-26 18:52:17.825282192 +0000 UTC m=+0.088391666 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:52:17 np0005596060 jovial_sutherland[316733]: {
Jan 26 13:52:17 np0005596060 jovial_sutherland[316733]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:52:17 np0005596060 jovial_sutherland[316733]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:52:17 np0005596060 jovial_sutherland[316733]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:52:17 np0005596060 jovial_sutherland[316733]:        "osd_id": 1,
Jan 26 13:52:17 np0005596060 jovial_sutherland[316733]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:52:17 np0005596060 jovial_sutherland[316733]:        "type": "bluestore"
Jan 26 13:52:17 np0005596060 jovial_sutherland[316733]:    }
Jan 26 13:52:17 np0005596060 jovial_sutherland[316733]: }
Jan 26 13:52:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:18.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:18 np0005596060 systemd[1]: libpod-f69ff7956da82da20128a5bc7c50b89b489ce97a7bdce6a403a943e38fcb3619.scope: Deactivated successfully.
Jan 26 13:52:18 np0005596060 podman[316718]: 2026-01-26 18:52:18.040852989 +0000 UTC m=+0.975041721 container died f69ff7956da82da20128a5bc7c50b89b489ce97a7bdce6a403a943e38fcb3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 26 13:52:18 np0005596060 systemd[1]: var-lib-containers-storage-overlay-eef7267c3751fd0a16b0c776a98616ee2da790979594a1a184502a44c94fe621-merged.mount: Deactivated successfully.
Jan 26 13:52:18 np0005596060 podman[316718]: 2026-01-26 18:52:18.097419841 +0000 UTC m=+1.031608563 container remove f69ff7956da82da20128a5bc7c50b89b489ce97a7bdce6a403a943e38fcb3619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 26 13:52:18 np0005596060 systemd[1]: libpod-conmon-f69ff7956da82da20128a5bc7c50b89b489ce97a7bdce6a403a943e38fcb3619.scope: Deactivated successfully.
Jan 26 13:52:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:52:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:52:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:52:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:52:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 30132943-7598-4bb5-a02d-355991a3a6f9 does not exist
Jan 26 13:52:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 29687d3c-f2ec-4d00-8161-89b796ddf02d does not exist
Jan 26 13:52:18 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d1a8ff5e-95ce-434a-94ce-d127860eae11 does not exist
Jan 26 13:52:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:18.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:19 np0005596060 nova_compute[247421]: 2026-01-26 18:52:19.088 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:52:19 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:52:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:20.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:20.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:20 np0005596060 nova_compute[247421]: 2026-01-26 18:52:20.709 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:52:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:22.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:22.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:24.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:24 np0005596060 nova_compute[247421]: 2026-01-26 18:52:24.090 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:24.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:25 np0005596060 nova_compute[247421]: 2026-01-26 18:52:25.742 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:26.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:26.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:52:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:28.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:28.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:29 np0005596060 nova_compute[247421]: 2026-01-26 18:52:29.092 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:30.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:30.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:30 np0005596060 nova_compute[247421]: 2026-01-26 18:52:30.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:52:30 np0005596060 nova_compute[247421]: 2026-01-26 18:52:30.744 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:31 np0005596060 nova_compute[247421]: 2026-01-26 18:52:31.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:52:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:52:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:32.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:32.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:34.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:34 np0005596060 nova_compute[247421]: 2026-01-26 18:52:34.094 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:34.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:35 np0005596060 nova_compute[247421]: 2026-01-26 18:52:35.748 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:36.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:36.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:36 np0005596060 nova_compute[247421]: 2026-01-26 18:52:36.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:52:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:52:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:37 np0005596060 nova_compute[247421]: 2026-01-26 18:52:37.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:52:37 np0005596060 nova_compute[247421]: 2026-01-26 18:52:37.650 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:52:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:38.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:38.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:38 np0005596060 nova_compute[247421]: 2026-01-26 18:52:38.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:52:39 np0005596060 nova_compute[247421]: 2026-01-26 18:52:39.096 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:40.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:40.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:40 np0005596060 nova_compute[247421]: 2026-01-26 18:52:40.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:52:40 np0005596060 nova_compute[247421]: 2026-01-26 18:52:40.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:52:40 np0005596060 nova_compute[247421]: 2026-01-26 18:52:40.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:52:40 np0005596060 nova_compute[247421]: 2026-01-26 18:52:40.664 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:52:40 np0005596060 nova_compute[247421]: 2026-01-26 18:52:40.750 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:41 np0005596060 nova_compute[247421]: 2026-01-26 18:52:41.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:52:41 np0005596060 nova_compute[247421]: 2026-01-26 18:52:41.679 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:52:41 np0005596060 nova_compute[247421]: 2026-01-26 18:52:41.680 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:52:41 np0005596060 nova_compute[247421]: 2026-01-26 18:52:41.680 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:52:41 np0005596060 nova_compute[247421]: 2026-01-26 18:52:41.680 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:52:41 np0005596060 nova_compute[247421]: 2026-01-26 18:52:41.680 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:52:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:52:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:42.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:52:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3601957237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.102 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.256 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.257 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4557MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.257 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.257 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.366 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.366 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.386 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:52:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:42.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:42 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:52:42 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/752718009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.840 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.847 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.862 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.864 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 26 13:52:42 np0005596060 nova_compute[247421]: 2026-01-26 18:52:42.864 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:52:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:44.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:44 np0005596060 nova_compute[247421]: 2026-01-26 18:52:44.098 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:52:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:52:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:52:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:52:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:52:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:52:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:52:44
Jan 26 13:52:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:52:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:52:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'images', 'backups', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', '.rgw.root']
Jan 26 13:52:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:52:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:44.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:52:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:52:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:52:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:52:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:52:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:52:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:52:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:52:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:52:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:52:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:45 np0005596060 nova_compute[247421]: 2026-01-26 18:52:45.788 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:45 np0005596060 nova_compute[247421]: 2026-01-26 18:52:45.865 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:52:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:46.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:46.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:52:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:47 np0005596060 nova_compute[247421]: 2026-01-26 18:52:47.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:52:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:48.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:48 np0005596060 podman[316988]: 2026-01-26 18:52:48.097243544 +0000 UTC m=+0.079772751 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 13:52:48 np0005596060 podman[316989]: 2026-01-26 18:52:48.173022754 +0000 UTC m=+0.155442788 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 13:52:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:48.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:49 np0005596060 nova_compute[247421]: 2026-01-26 18:52:49.100 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:50.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:50.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:50 np0005596060 nova_compute[247421]: 2026-01-26 18:52:50.647 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:52:50 np0005596060 nova_compute[247421]: 2026-01-26 18:52:50.791 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:52:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:52.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:52.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 26 13:52:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 26 13:52:53 np0005596060 radosgw[92919]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 26 13:52:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:52:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:54.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:52:54 np0005596060 nova_compute[247421]: 2026-01-26 18:52:54.103 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:54.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:54 np0005596060 nova_compute[247421]: 2026-01-26 18:52:54.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:52:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 41 MiB data, 412 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:52:55 np0005596060 nova_compute[247421]: 2026-01-26 18:52:55.794 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:56.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 26 13:52:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:56.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 26 13:52:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:52:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 0 B/s wr, 92 op/s
Jan 26 13:52:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:52:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:52:58.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:52:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:52:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:52:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:52:58.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:52:59 np0005596060 nova_compute[247421]: 2026-01-26 18:52:59.104 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:52:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 157 op/s
Jan 26 13:53:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:00.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:00.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:00 np0005596060 nova_compute[247421]: 2026-01-26 18:53:00.849 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 157 op/s
Jan 26 13:53:01 np0005596060 nova_compute[247421]: 2026-01-26 18:53:01.264 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:53:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:53:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:02.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:02.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 157 op/s
Jan 26 13:53:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:04.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:04 np0005596060 nova_compute[247421]: 2026-01-26 18:53:04.107 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:53:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:53:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:04.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 157 op/s
Jan 26 13:53:05 np0005596060 nova_compute[247421]: 2026-01-26 18:53:05.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:53:05 np0005596060 nova_compute[247421]: 2026-01-26 18:53:05.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 26 13:53:05 np0005596060 nova_compute[247421]: 2026-01-26 18:53:05.891 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:06.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:06.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:53:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 94 KiB/s rd, 0 B/s wr, 157 op/s
Jan 26 13:53:07 np0005596060 nova_compute[247421]: 2026-01-26 18:53:07.986 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 26 13:53:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:08.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:08.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:09 np0005596060 nova_compute[247421]: 2026-01-26 18:53:09.109 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 0 B/s wr, 65 op/s
Jan 26 13:53:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:10.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:10.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:10 np0005596060 nova_compute[247421]: 2026-01-26 18:53:10.945 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:53:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:12.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:12.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:14.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:14 np0005596060 nova_compute[247421]: 2026-01-26 18:53:14.112 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:53:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:53:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:53:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:53:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:53:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:53:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:14.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:53:14.779 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:53:14.779 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:53:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:53:14.780 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:53:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:15 np0005596060 nova_compute[247421]: 2026-01-26 18:53:15.947 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:16.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:16.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:53:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:18.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:18 np0005596060 podman[317127]: 2026-01-26 18:53:18.780824329 +0000 UTC m=+0.049328762 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 13:53:18 np0005596060 podman[317133]: 2026-01-26 18:53:18.852984379 +0000 UTC m=+0.119276936 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 13:53:19 np0005596060 nova_compute[247421]: 2026-01-26 18:53:19.114 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:53:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:20.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:20.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 26 13:53:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:53:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 26 13:53:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:53:20 np0005596060 nova_compute[247421]: 2026-01-26 18:53:20.949 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:53:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:53:21 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 78c159aa-73db-484f-9da7-40def51f054a does not exist
Jan 26 13:53:21 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 3ae93856-e2ec-4948-9b52-f2aefc1784c6 does not exist
Jan 26 13:53:21 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev d8ecae81-cb2b-4487-ac59-7f40d2b12743 does not exist
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 26 13:53:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:53:21 np0005596060 podman[317444]: 2026-01-26 18:53:21.947343715 +0000 UTC m=+0.041760792 container create 179e000bdaf1945fc0903bd2737aede5e764d8d1602fb7b53ebee07ddc53501f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ganguly, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 13:53:21 np0005596060 systemd[1]: Started libpod-conmon-179e000bdaf1945fc0903bd2737aede5e764d8d1602fb7b53ebee07ddc53501f.scope.
Jan 26 13:53:22 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:53:22 np0005596060 podman[317444]: 2026-01-26 18:53:21.929639734 +0000 UTC m=+0.024056841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:53:22 np0005596060 podman[317444]: 2026-01-26 18:53:22.030352366 +0000 UTC m=+0.124769463 container init 179e000bdaf1945fc0903bd2737aede5e764d8d1602fb7b53ebee07ddc53501f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 13:53:22 np0005596060 podman[317444]: 2026-01-26 18:53:22.037159256 +0000 UTC m=+0.131576323 container start 179e000bdaf1945fc0903bd2737aede5e764d8d1602fb7b53ebee07ddc53501f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 13:53:22 np0005596060 podman[317444]: 2026-01-26 18:53:22.040282214 +0000 UTC m=+0.134699321 container attach 179e000bdaf1945fc0903bd2737aede5e764d8d1602fb7b53ebee07ddc53501f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 13:53:22 np0005596060 nervous_ganguly[317460]: 167 167
Jan 26 13:53:22 np0005596060 systemd[1]: libpod-179e000bdaf1945fc0903bd2737aede5e764d8d1602fb7b53ebee07ddc53501f.scope: Deactivated successfully.
Jan 26 13:53:22 np0005596060 podman[317444]: 2026-01-26 18:53:22.044346575 +0000 UTC m=+0.138763652 container died 179e000bdaf1945fc0903bd2737aede5e764d8d1602fb7b53ebee07ddc53501f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 26 13:53:22 np0005596060 systemd[1]: var-lib-containers-storage-overlay-46802d4c29b792e48c69ec331341f0d6fea323140bf37b46786eee618df1d60b-merged.mount: Deactivated successfully.
Jan 26 13:53:22 np0005596060 podman[317444]: 2026-01-26 18:53:22.082201219 +0000 UTC m=+0.176618296 container remove 179e000bdaf1945fc0903bd2737aede5e764d8d1602fb7b53ebee07ddc53501f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Jan 26 13:53:22 np0005596060 systemd[1]: libpod-conmon-179e000bdaf1945fc0903bd2737aede5e764d8d1602fb7b53ebee07ddc53501f.scope: Deactivated successfully.
Jan 26 13:53:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:22.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:22 np0005596060 podman[317485]: 2026-01-26 18:53:22.250005424 +0000 UTC m=+0.041763563 container create 3c011e42755698b157cb1a88a5f78ceabbecb19a03983e1e9749bb8ec3a5db5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 13:53:22 np0005596060 systemd[1]: Started libpod-conmon-3c011e42755698b157cb1a88a5f78ceabbecb19a03983e1e9749bb8ec3a5db5d.scope.
Jan 26 13:53:22 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:53:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64ece137dfbb978860f104177fbafd8b6d623fdd730394d3471355dd4103fcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64ece137dfbb978860f104177fbafd8b6d623fdd730394d3471355dd4103fcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64ece137dfbb978860f104177fbafd8b6d623fdd730394d3471355dd4103fcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64ece137dfbb978860f104177fbafd8b6d623fdd730394d3471355dd4103fcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:22 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e64ece137dfbb978860f104177fbafd8b6d623fdd730394d3471355dd4103fcd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:22 np0005596060 podman[317485]: 2026-01-26 18:53:22.326281637 +0000 UTC m=+0.118039796 container init 3c011e42755698b157cb1a88a5f78ceabbecb19a03983e1e9749bb8ec3a5db5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 26 13:53:22 np0005596060 podman[317485]: 2026-01-26 18:53:22.232759774 +0000 UTC m=+0.024517943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:53:22 np0005596060 podman[317485]: 2026-01-26 18:53:22.334062571 +0000 UTC m=+0.125820710 container start 3c011e42755698b157cb1a88a5f78ceabbecb19a03983e1e9749bb8ec3a5db5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:53:22 np0005596060 podman[317485]: 2026-01-26 18:53:22.338104101 +0000 UTC m=+0.129862260 container attach 3c011e42755698b157cb1a88a5f78ceabbecb19a03983e1e9749bb8ec3a5db5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:53:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:22.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:22 np0005596060 nova_compute[247421]: 2026-01-26 18:53:22.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:53:22 np0005596060 nova_compute[247421]: 2026-01-26 18:53:22.653 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 13:53:23 np0005596060 relaxed_archimedes[317502]: --> passed data devices: 0 physical, 1 LVM
Jan 26 13:53:23 np0005596060 relaxed_archimedes[317502]: --> relative data size: 1.0
Jan 26 13:53:23 np0005596060 relaxed_archimedes[317502]: --> All data devices are unavailable
Jan 26 13:53:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:23 np0005596060 systemd[1]: libpod-3c011e42755698b157cb1a88a5f78ceabbecb19a03983e1e9749bb8ec3a5db5d.scope: Deactivated successfully.
Jan 26 13:53:23 np0005596060 podman[317485]: 2026-01-26 18:53:23.222569164 +0000 UTC m=+1.014327323 container died 3c011e42755698b157cb1a88a5f78ceabbecb19a03983e1e9749bb8ec3a5db5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:53:23 np0005596060 systemd[1]: var-lib-containers-storage-overlay-e64ece137dfbb978860f104177fbafd8b6d623fdd730394d3471355dd4103fcd-merged.mount: Deactivated successfully.
Jan 26 13:53:23 np0005596060 podman[317485]: 2026-01-26 18:53:23.281010661 +0000 UTC m=+1.072768810 container remove 3c011e42755698b157cb1a88a5f78ceabbecb19a03983e1e9749bb8ec3a5db5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_archimedes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 13:53:23 np0005596060 systemd[1]: libpod-conmon-3c011e42755698b157cb1a88a5f78ceabbecb19a03983e1e9749bb8ec3a5db5d.scope: Deactivated successfully.
Jan 26 13:53:23 np0005596060 podman[317669]: 2026-01-26 18:53:23.947051845 +0000 UTC m=+0.045196528 container create c6ac30c65031e759fd212dda645b3c9032ecad8581cf18da3eafa1d58018d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 13:53:23 np0005596060 systemd[1]: Started libpod-conmon-c6ac30c65031e759fd212dda645b3c9032ecad8581cf18da3eafa1d58018d7af.scope.
Jan 26 13:53:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:53:24 np0005596060 podman[317669]: 2026-01-26 18:53:23.929882477 +0000 UTC m=+0.028027190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:53:24 np0005596060 podman[317669]: 2026-01-26 18:53:24.027646416 +0000 UTC m=+0.125791099 container init c6ac30c65031e759fd212dda645b3c9032ecad8581cf18da3eafa1d58018d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 26 13:53:24 np0005596060 podman[317669]: 2026-01-26 18:53:24.037104192 +0000 UTC m=+0.135248895 container start c6ac30c65031e759fd212dda645b3c9032ecad8581cf18da3eafa1d58018d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 13:53:24 np0005596060 podman[317669]: 2026-01-26 18:53:24.041109611 +0000 UTC m=+0.139254384 container attach c6ac30c65031e759fd212dda645b3c9032ecad8581cf18da3eafa1d58018d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 13:53:24 np0005596060 hopeful_hoover[317685]: 167 167
Jan 26 13:53:24 np0005596060 systemd[1]: libpod-c6ac30c65031e759fd212dda645b3c9032ecad8581cf18da3eafa1d58018d7af.scope: Deactivated successfully.
Jan 26 13:53:24 np0005596060 podman[317669]: 2026-01-26 18:53:24.044610049 +0000 UTC m=+0.142754772 container died c6ac30c65031e759fd212dda645b3c9032ecad8581cf18da3eafa1d58018d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 26 13:53:24 np0005596060 systemd[1]: var-lib-containers-storage-overlay-b00cf7d4b43c6f9b9dca13f55f2f818f667c2cb245debf2b003dca98752682b3-merged.mount: Deactivated successfully.
Jan 26 13:53:24 np0005596060 podman[317669]: 2026-01-26 18:53:24.087064678 +0000 UTC m=+0.185209371 container remove c6ac30c65031e759fd212dda645b3c9032ecad8581cf18da3eafa1d58018d7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:53:24 np0005596060 systemd[1]: libpod-conmon-c6ac30c65031e759fd212dda645b3c9032ecad8581cf18da3eafa1d58018d7af.scope: Deactivated successfully.
Jan 26 13:53:24 np0005596060 nova_compute[247421]: 2026-01-26 18:53:24.117 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:53:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:24.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:24 np0005596060 podman[317708]: 2026-01-26 18:53:24.265365825 +0000 UTC m=+0.047804893 container create 7a34c399e726d5655e4b5855479b35ecf51b5a45b4a2f1fd420579c649805c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 13:53:24 np0005596060 systemd[1]: Started libpod-conmon-7a34c399e726d5655e4b5855479b35ecf51b5a45b4a2f1fd420579c649805c4e.scope.
Jan 26 13:53:24 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:53:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e23180546443b7a82dcef93b7d90b2576a15b62e0ea16c8c2137aea72b7b164/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e23180546443b7a82dcef93b7d90b2576a15b62e0ea16c8c2137aea72b7b164/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:24 np0005596060 podman[317708]: 2026-01-26 18:53:24.242722741 +0000 UTC m=+0.025161839 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:53:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e23180546443b7a82dcef93b7d90b2576a15b62e0ea16c8c2137aea72b7b164/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:24 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e23180546443b7a82dcef93b7d90b2576a15b62e0ea16c8c2137aea72b7b164/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:24 np0005596060 podman[317708]: 2026-01-26 18:53:24.345939445 +0000 UTC m=+0.128378523 container init 7a34c399e726d5655e4b5855479b35ecf51b5a45b4a2f1fd420579c649805c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 26 13:53:24 np0005596060 podman[317708]: 2026-01-26 18:53:24.352048548 +0000 UTC m=+0.134487596 container start 7a34c399e726d5655e4b5855479b35ecf51b5a45b4a2f1fd420579c649805c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:53:24 np0005596060 podman[317708]: 2026-01-26 18:53:24.354861408 +0000 UTC m=+0.137300486 container attach 7a34c399e726d5655e4b5855479b35ecf51b5a45b4a2f1fd420579c649805c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 26 13:53:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:24.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]: {
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:    "1": [
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:        {
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            "devices": [
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "/dev/loop3"
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            ],
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            "lv_name": "ceph_lv0",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            "lv_size": "7511998464",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=d4cd1917-5876-51b6-bc64-65a16199754d,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=2192cb4e-a674-4139-ac32-841945fb067d,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            "lv_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            "name": "ceph_lv0",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            "tags": {
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "ceph.block_uuid": "4DsLj5-uT3C-dJv9-RCC2-cCAY-mT3z-yZV5dr",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "ceph.cephx_lockbox_secret": "",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "ceph.cluster_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "ceph.cluster_name": "ceph",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "ceph.crush_device_class": "",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "ceph.encrypted": "0",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "ceph.osd_fsid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "ceph.osd_id": "1",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "ceph.type": "block",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:                "ceph.vdo": "0"
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            },
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            "type": "block",
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:            "vg_name": "ceph_vg0"
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:        }
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]:    ]
Jan 26 13:53:25 np0005596060 happy_varahamihira[317725]: }
Jan 26 13:53:25 np0005596060 systemd[1]: libpod-7a34c399e726d5655e4b5855479b35ecf51b5a45b4a2f1fd420579c649805c4e.scope: Deactivated successfully.
Jan 26 13:53:25 np0005596060 conmon[317725]: conmon 7a34c399e726d5655e4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a34c399e726d5655e4b5855479b35ecf51b5a45b4a2f1fd420579c649805c4e.scope/container/memory.events
Jan 26 13:53:25 np0005596060 podman[317708]: 2026-01-26 18:53:25.119534552 +0000 UTC m=+0.901973640 container died 7a34c399e726d5655e4b5855479b35ecf51b5a45b4a2f1fd420579c649805c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 13:53:25 np0005596060 systemd[1]: var-lib-containers-storage-overlay-6e23180546443b7a82dcef93b7d90b2576a15b62e0ea16c8c2137aea72b7b164-merged.mount: Deactivated successfully.
Jan 26 13:53:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:25 np0005596060 podman[317708]: 2026-01-26 18:53:25.181370114 +0000 UTC m=+0.963809162 container remove 7a34c399e726d5655e4b5855479b35ecf51b5a45b4a2f1fd420579c649805c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_varahamihira, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 13:53:25 np0005596060 systemd[1]: libpod-conmon-7a34c399e726d5655e4b5855479b35ecf51b5a45b4a2f1fd420579c649805c4e.scope: Deactivated successfully.
Jan 26 13:53:25 np0005596060 podman[317885]: 2026-01-26 18:53:25.877528188 +0000 UTC m=+0.037615919 container create 18fe5bd5a5d8202ef98e1f00472574744b1571836923707e061fe80a309682d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 26 13:53:25 np0005596060 systemd[1]: Started libpod-conmon-18fe5bd5a5d8202ef98e1f00472574744b1571836923707e061fe80a309682d5.scope.
Jan 26 13:53:25 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:53:25 np0005596060 podman[317885]: 2026-01-26 18:53:25.955274458 +0000 UTC m=+0.115362209 container init 18fe5bd5a5d8202ef98e1f00472574744b1571836923707e061fe80a309682d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 13:53:25 np0005596060 podman[317885]: 2026-01-26 18:53:25.861696184 +0000 UTC m=+0.021783935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:53:25 np0005596060 nova_compute[247421]: 2026-01-26 18:53:25.993 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:26 np0005596060 podman[317885]: 2026-01-26 18:53:26.000070415 +0000 UTC m=+0.160158146 container start 18fe5bd5a5d8202ef98e1f00472574744b1571836923707e061fe80a309682d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 26 13:53:26 np0005596060 podman[317885]: 2026-01-26 18:53:26.003605243 +0000 UTC m=+0.163692984 container attach 18fe5bd5a5d8202ef98e1f00472574744b1571836923707e061fe80a309682d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 26 13:53:26 np0005596060 cranky_einstein[317901]: 167 167
Jan 26 13:53:26 np0005596060 systemd[1]: libpod-18fe5bd5a5d8202ef98e1f00472574744b1571836923707e061fe80a309682d5.scope: Deactivated successfully.
Jan 26 13:53:26 np0005596060 podman[317885]: 2026-01-26 18:53:26.007685725 +0000 UTC m=+0.167773466 container died 18fe5bd5a5d8202ef98e1f00472574744b1571836923707e061fe80a309682d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 13:53:26 np0005596060 systemd[1]: var-lib-containers-storage-overlay-c53abaf2104bd9bd2d8a632e9fb4a1b757f85178e708090e09f5a8f7898a8b54-merged.mount: Deactivated successfully.
Jan 26 13:53:26 np0005596060 podman[317885]: 2026-01-26 18:53:26.048589106 +0000 UTC m=+0.208676837 container remove 18fe5bd5a5d8202ef98e1f00472574744b1571836923707e061fe80a309682d5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_einstein, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Jan 26 13:53:26 np0005596060 systemd[1]: libpod-conmon-18fe5bd5a5d8202ef98e1f00472574744b1571836923707e061fe80a309682d5.scope: Deactivated successfully.
Jan 26 13:53:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:26.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:26 np0005596060 podman[317925]: 2026-01-26 18:53:26.210603447 +0000 UTC m=+0.042652375 container create 1bcf48391890f34c7ec6ebb29083847633d78a7da8feaf77236e5f5a333142fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:53:26 np0005596060 systemd[1]: Started libpod-conmon-1bcf48391890f34c7ec6ebb29083847633d78a7da8feaf77236e5f5a333142fd.scope.
Jan 26 13:53:26 np0005596060 systemd[1]: Started libcrun container.
Jan 26 13:53:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131eb25de73c02ebc7f48f8f55ecaf6915310e634cb4fb36e7a73f704283aa90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131eb25de73c02ebc7f48f8f55ecaf6915310e634cb4fb36e7a73f704283aa90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131eb25de73c02ebc7f48f8f55ecaf6915310e634cb4fb36e7a73f704283aa90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:26 np0005596060 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/131eb25de73c02ebc7f48f8f55ecaf6915310e634cb4fb36e7a73f704283aa90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 13:53:26 np0005596060 podman[317925]: 2026-01-26 18:53:26.195130261 +0000 UTC m=+0.027179209 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 26 13:53:26 np0005596060 podman[317925]: 2026-01-26 18:53:26.294997302 +0000 UTC m=+0.127046250 container init 1bcf48391890f34c7ec6ebb29083847633d78a7da8feaf77236e5f5a333142fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 26 13:53:26 np0005596060 podman[317925]: 2026-01-26 18:53:26.303231527 +0000 UTC m=+0.135280465 container start 1bcf48391890f34c7ec6ebb29083847633d78a7da8feaf77236e5f5a333142fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 26 13:53:26 np0005596060 podman[317925]: 2026-01-26 18:53:26.306311024 +0000 UTC m=+0.138359952 container attach 1bcf48391890f34c7ec6ebb29083847633d78a7da8feaf77236e5f5a333142fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 13:53:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:26.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:53:27 np0005596060 festive_nobel[317942]: {
Jan 26 13:53:27 np0005596060 festive_nobel[317942]:    "2192cb4e-a674-4139-ac32-841945fb067d": {
Jan 26 13:53:27 np0005596060 festive_nobel[317942]:        "ceph_fsid": "d4cd1917-5876-51b6-bc64-65a16199754d",
Jan 26 13:53:27 np0005596060 festive_nobel[317942]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 26 13:53:27 np0005596060 festive_nobel[317942]:        "osd_id": 1,
Jan 26 13:53:27 np0005596060 festive_nobel[317942]:        "osd_uuid": "2192cb4e-a674-4139-ac32-841945fb067d",
Jan 26 13:53:27 np0005596060 festive_nobel[317942]:        "type": "bluestore"
Jan 26 13:53:27 np0005596060 festive_nobel[317942]:    }
Jan 26 13:53:27 np0005596060 festive_nobel[317942]: }
Jan 26 13:53:27 np0005596060 systemd[1]: libpod-1bcf48391890f34c7ec6ebb29083847633d78a7da8feaf77236e5f5a333142fd.scope: Deactivated successfully.
Jan 26 13:53:27 np0005596060 podman[317963]: 2026-01-26 18:53:27.175755762 +0000 UTC m=+0.026047681 container died 1bcf48391890f34c7ec6ebb29083847633d78a7da8feaf77236e5f5a333142fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 13:53:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:27 np0005596060 systemd[1]: var-lib-containers-storage-overlay-131eb25de73c02ebc7f48f8f55ecaf6915310e634cb4fb36e7a73f704283aa90-merged.mount: Deactivated successfully.
Jan 26 13:53:27 np0005596060 podman[317963]: 2026-01-26 18:53:27.222630281 +0000 UTC m=+0.072922210 container remove 1bcf48391890f34c7ec6ebb29083847633d78a7da8feaf77236e5f5a333142fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 26 13:53:27 np0005596060 systemd[1]: libpod-conmon-1bcf48391890f34c7ec6ebb29083847633d78a7da8feaf77236e5f5a333142fd.scope: Deactivated successfully.
Jan 26 13:53:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:53:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:53:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:53:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:53:27 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 7372996e-da3b-40fe-ab24-10dd8172ab08 does not exist
Jan 26 13:53:27 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev de6b9497-bfb3-409d-bed7-8a9fd0c5c2d5 does not exist
Jan 26 13:53:27 np0005596060 ceph-mgr[74563]: [progress WARNING root] complete: ev 605a8d74-d118-4561-b0a9-37ecbdff4396 does not exist
Jan 26 13:53:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:28.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:28 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:53:28 np0005596060 ceph-mon[74267]: from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:53:28 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:28 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:28 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:28.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:29 np0005596060 nova_compute[247421]: 2026-01-26 18:53:29.119 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:29 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:30.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:30 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:30 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:30 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:30.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:30 np0005596060 nova_compute[247421]: 2026-01-26 18:53:30.995 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:31 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:31 np0005596060 nova_compute[247421]: 2026-01-26 18:53:31.669 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:53:31 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:53:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:32.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:32 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:32 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:32 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:32.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:32 np0005596060 nova_compute[247421]: 2026-01-26 18:53:32.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:53:33 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:34 np0005596060 nova_compute[247421]: 2026-01-26 18:53:34.120 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:34.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:34 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:34 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:34 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:34.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:35 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:35 np0005596060 nova_compute[247421]: 2026-01-26 18:53:35.997 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:36.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:36 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:36 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:36 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:36.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:36 np0005596060 nova_compute[247421]: 2026-01-26 18:53:36.645 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:53:36 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:53:37 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:37 np0005596060 nova_compute[247421]: 2026-01-26 18:53:37.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:53:37 np0005596060 nova_compute[247421]: 2026-01-26 18:53:37.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 26 13:53:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:38.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:38 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:38 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:53:38 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:38.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:53:38 np0005596060 nova_compute[247421]: 2026-01-26 18:53:38.652 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:53:39 np0005596060 nova_compute[247421]: 2026-01-26 18:53:39.121 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:39 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:40.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 26 13:53:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/260721855' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 26 13:53:40 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 26 13:53:40 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/260721855' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 26 13:53:40 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:40 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:40 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:40.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:41 np0005596060 nova_compute[247421]: 2026-01-26 18:53:41.000 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:41 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:41.934483) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453621934542, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1876, "num_deletes": 251, "total_data_size": 3409177, "memory_usage": 3474752, "flush_reason": "Manual Compaction"}
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453621956618, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3347568, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51216, "largest_seqno": 53091, "table_properties": {"data_size": 3339055, "index_size": 5263, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17357, "raw_average_key_size": 20, "raw_value_size": 3322103, "raw_average_value_size": 3862, "num_data_blocks": 231, "num_entries": 860, "num_filter_entries": 860, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769453427, "oldest_key_time": 1769453427, "file_creation_time": 1769453621, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 22660 microseconds, and 9336 cpu microseconds.
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:41.956704) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3347568 bytes OK
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:41.957250) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:41.958272) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:41.958285) EVENT_LOG_v1 {"time_micros": 1769453621958281, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:41.958301) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3401542, prev total WAL file size 3401542, number of live WAL files 2.
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:41.959594) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3269KB)], [116(10MB)]
Jan 26 13:53:41 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453621959671, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 13845204, "oldest_snapshot_seqno": -1}
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7676 keys, 11848342 bytes, temperature: kUnknown
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453622034698, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 11848342, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11797954, "index_size": 30116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 199154, "raw_average_key_size": 25, "raw_value_size": 11661220, "raw_average_value_size": 1519, "num_data_blocks": 1193, "num_entries": 7676, "num_filter_entries": 7676, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449131, "oldest_key_time": 0, "file_creation_time": 1769453621, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a7008efc-af18-475b-8e6d-abf0122d49b8", "db_session_id": "RT38KYIRSGE9064E0SIL", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:42.034975) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11848342 bytes
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:42.036224) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.3 rd, 157.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.0 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 8193, records dropped: 517 output_compression: NoCompression
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:42.036245) EVENT_LOG_v1 {"time_micros": 1769453622036235, "job": 70, "event": "compaction_finished", "compaction_time_micros": 75106, "compaction_time_cpu_micros": 29043, "output_level": 6, "num_output_files": 1, "total_output_size": 11848342, "num_input_records": 8193, "num_output_records": 7676, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453622037686, "job": 70, "event": "table_file_deletion", "file_number": 118}
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769453622040447, "job": 70, "event": "table_file_deletion", "file_number": 116}
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:41.959446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:42.040503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:42.040507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:42.040509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:42.040511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:53:42 np0005596060 ceph-mon[74267]: rocksdb: (Original Log Time 2026/01/26-18:53:42.040513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 13:53:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:42.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:42 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:42 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:42 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:42.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:42 np0005596060 nova_compute[247421]: 2026-01-26 18:53:42.651 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:53:42 np0005596060 nova_compute[247421]: 2026-01-26 18:53:42.651 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 26 13:53:42 np0005596060 nova_compute[247421]: 2026-01-26 18:53:42.652 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 26 13:53:42 np0005596060 nova_compute[247421]: 2026-01-26 18:53:42.664 247428 DEBUG nova.compute.manager [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 26 13:53:43 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:43 np0005596060 nova_compute[247421]: 2026-01-26 18:53:43.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 26 13:53:43 np0005596060 nova_compute[247421]: 2026-01-26 18:53:43.686 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:53:43 np0005596060 nova_compute[247421]: 2026-01-26 18:53:43.686 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:53:43 np0005596060 nova_compute[247421]: 2026-01-26 18:53:43.686 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:53:43 np0005596060 nova_compute[247421]: 2026-01-26 18:53:43.687 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 26 13:53:43 np0005596060 nova_compute[247421]: 2026-01-26 18:53:43.687 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 26 13:53:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:53:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:53:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:53:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:53:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:53:44 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.123 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:53:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:53:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3147638831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:53:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.150 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 26 13:53:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:44.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Optimize plan auto_2026-01-26_18:53:44
Jan 26 13:53:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 13:53:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] do_upmap
Jan 26 13:53:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'vms', '.mgr', 'default.rgw.log', 'backups', '.rgw.root', 'volumes']
Jan 26 13:53:44 np0005596060 ceph-mgr[74563]: [balancer INFO root] prepared 0/10 changes
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.305 247428 WARNING nova.virt.libvirt.driver [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.306 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4564MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.306 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.306 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.380 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.381 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.433 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing inventories for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.493 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating ProviderTree inventory for provider c679f5ea-e093-4909-bb04-0342c8551a8f from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.494 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Updating inventory in ProviderTree for provider c679f5ea-e093-4909-bb04-0342c8551a8f with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.506 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing aggregate associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.526 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Refreshing trait associations for resource provider c679f5ea-e093-4909-bb04-0342c8551a8f, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,COMPUTE_NODE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_ACCELERATORS,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.542 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 13:53:44 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:44 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:44 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:44.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:44 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 26 13:53:44 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310985466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.968 247428 DEBUG oslo_concurrency.processutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.974 247428 DEBUG nova.compute.provider_tree [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed in ProviderTree for provider: c679f5ea-e093-4909-bb04-0342c8551a8f update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.988 247428 DEBUG nova.scheduler.client.report [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Inventory has not changed for provider c679f5ea-e093-4909-bb04-0342c8551a8f based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.990 247428 DEBUG nova.compute.resource_tracker [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 13:53:44 np0005596060 nova_compute[247421]: 2026-01-26 18:53:44.990 247428 DEBUG oslo_concurrency.lockutils [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 13:53:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 13:53:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:53:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 13:53:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 13:53:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:53:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:53:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:53:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 13:53:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 13:53:45 np0005596060 ceph-mgr[74563]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 13:53:45 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:45 np0005596060 nova_compute[247421]: 2026-01-26 18:53:45.991 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:53:46 np0005596060 nova_compute[247421]: 2026-01-26 18:53:46.041 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:53:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:46.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:46 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:46 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:46 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:46.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:46 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:53:47 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:47 np0005596060 nova_compute[247421]: 2026-01-26 18:53:47.650 247428 DEBUG oslo_service.periodic_task [None req-4123bf43-99c8-405c-b99a-488ad64b5739 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 13:53:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:48.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:48 np0005596060 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 13:53:48 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:48 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:48 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:48.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:49 np0005596060 podman[318156]: 2026-01-26 18:53:49.015852001 +0000 UTC m=+0.059519836 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 26 13:53:49 np0005596060 podman[318165]: 2026-01-26 18:53:49.068459503 +0000 UTC m=+0.111265676 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 13:53:49 np0005596060 nova_compute[247421]: 2026-01-26 18:53:49.124 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:53:49 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:50.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:50 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:50 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:50 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:50.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:51 np0005596060 nova_compute[247421]: 2026-01-26 18:53:51.042 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:53:51 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:51 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:53:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:52.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:52 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:52 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:52 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:52.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:53 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:54 np0005596060 nova_compute[247421]: 2026-01-26 18:53:54.126 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:53:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:54.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:54 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:54 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:54 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:54.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:55 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:56 np0005596060 nova_compute[247421]: 2026-01-26 18:53:56.044 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:53:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:56.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:56 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:56 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:56 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:56.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:56 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:53:57 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:53:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:53:58.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:58 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:53:58 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:53:58 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:53:58.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:53:59 np0005596060 nova_compute[247421]: 2026-01-26 18:53:59.128 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:53:59 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:00.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:00 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:00 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:00 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:00.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:01 np0005596060 nova_compute[247421]: 2026-01-26 18:54:01.046 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:54:01 np0005596060 systemd-logind[786]: New session 55 of user zuul.
Jan 26 13:54:01 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:01 np0005596060 systemd[1]: Started Session 55 of User zuul.
Jan 26 13:54:01 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:54:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:02.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:02 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:02 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:02 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:02.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:03 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:03 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21339 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:03 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29513 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:04 np0005596060 nova_compute[247421]: 2026-01-26 18:54:04.131 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 16)
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21345 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:04 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29519 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:04.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:04 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 26 13:54:04 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2549722290' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 26 13:54:04 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:04 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:04 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:04.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:05 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:06 np0005596060 nova_compute[247421]: 2026-01-26 18:54:06.048 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:54:06 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:06.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:06 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31180 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:06 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:06 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:06 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:06.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:06 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:54:07 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:08.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:08 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:08 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:08 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:08.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:09 np0005596060 nova_compute[247421]: 2026-01-26 18:54:09.132 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:54:09 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:09 np0005596060 ovs-vsctl[318617]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 26 13:54:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:10.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:10 np0005596060 virtqemud[246749]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 26 13:54:10 np0005596060 virtqemud[246749]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 26 13:54:10 np0005596060 virtqemud[246749]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 26 13:54:10 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:10 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:10 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:10.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:11 np0005596060 nova_compute[247421]: 2026-01-26 18:54:11.049 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 13:54:11 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:11 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: cache status {prefix=cache status} (starting...)
Jan 26 13:54:11 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Can't run that command on an inactive MDS!
Jan 26 13:54:11 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: client ls {prefix=client ls} (starting...)
Jan 26 13:54:11 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Can't run that command on an inactive MDS!
Jan 26 13:54:11 np0005596060 lvm[318967]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 13:54:11 np0005596060 lvm[318967]: VG ceph_vg0 finished
Jan 26 13:54:11 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29531 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:11 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21360 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:11 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:54:12 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: damage ls {prefix=damage ls} (starting...)
Jan 26 13:54:12 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Can't run that command on an inactive MDS!
Jan 26 13:54:12 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29540 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:12 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21369 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:12 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: dump loads {prefix=dump loads} (starting...)
Jan 26 13:54:12 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Can't run that command on an inactive MDS!
Jan 26 13:54:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 26 13:54:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 13:54:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 26 13:54:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3053944643' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 13:54:12 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 26 13:54:12 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Can't run that command on an inactive MDS!
Jan 26 13:54:12 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 26 13:54:12 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Can't run that command on an inactive MDS!
Jan 26 13:54:12 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 26 13:54:12 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Can't run that command on an inactive MDS!
Jan 26 13:54:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 26 13:54:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3720711262' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 26 13:54:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:54:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:12.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:54:12 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 26 13:54:12 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Can't run that command on an inactive MDS!
Jan 26 13:54:12 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29567 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:12 np0005596060 ceph-mgr[74563]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 13:54:12 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T18:54:12.857+0000 7f4edf408640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 13:54:12 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21387 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:12 np0005596060 ceph-mgr[74563]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 13:54:12 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T18:54:12.913+0000 7f4edf408640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 13:54:12 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:12 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:12 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:12.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:12 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 26 13:54:12 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/83354201' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 26 13:54:13 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 26 13:54:13 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Can't run that command on an inactive MDS!
Jan 26 13:54:13 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 26 13:54:13 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Can't run that command on an inactive MDS!
Jan 26 13:54:13 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 26 13:54:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3810567449' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 26 13:54:13 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: ops {prefix=ops} (starting...)
Jan 26 13:54:13 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Can't run that command on an inactive MDS!
Jan 26 13:54:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 26 13:54:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2886563842' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 26 13:54:13 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29600 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:13 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 26 13:54:13 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1870389608' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 13:54:13 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21417 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:14 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: session ls {prefix=session ls} (starting...)
Jan 26 13:54:14 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv Can't run that command on an inactive MDS!
Jan 26 13:54:14 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29612 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:54:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:54:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:54:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:54:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 13:54:14 np0005596060 ceph-mgr[74563]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 13:54:14 np0005596060 nova_compute[247421]: 2026-01-26 18:54:14.133 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:54:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 26 13:54:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511468281' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 13:54:14 np0005596060 ceph-mds[93477]: mds.cephfs.compute-0.wenkwv asok_command: status {prefix=status} (starting...)
Jan 26 13:54:14 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21429 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 26 13:54:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 13:54:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 26 13:54:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3247393312' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 13:54:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:14.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:14 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 26 13:54:14 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256584565' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 13:54:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:54:14.780 159331 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 26 13:54:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:54:14.781 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 26 13:54:14 np0005596060 ovn_metadata_agent[159326]: 2026-01-26 18:54:14.781 159331 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 26 13:54:14 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:14 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:14 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:14.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 26 13:54:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2567415867' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 13:54:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 26 13:54:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253360796' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 26 13:54:15 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:15 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31195 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:15 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29648 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:15 np0005596060 ceph-mgr[74563]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 13:54:15 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T18:54:15.381+0000 7f4edf408640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 13:54:15 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21468 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:15 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T18:54:15.436+0000 7f4edf408640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 13:54:15 np0005596060 ceph-mgr[74563]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 13:54:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 26 13:54:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109652598' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 26 13:54:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 26 13:54:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 26 13:54:15 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31210 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 26 13:54:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3071890798' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 13:54:15 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 26 13:54:15 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918735967' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 26 13:54:16 np0005596060 nova_compute[247421]: 2026-01-26 18:54:16.051 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:54:16 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29675 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:16 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21495 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 26 13:54:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/936815392' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 26 13:54:16 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31237 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:16 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T18:54:16.458+0000 7f4edf408640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 13:54:16 np0005596060 ceph-mgr[74563]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 13:54:16 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29687 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:16 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21507 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:16.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 26 13:54:16 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4105314783' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 26 13:54:16 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29699 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:16 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:16 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:54:16 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:16.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:54:16 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:54:17 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21519 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:17 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 26 13:54:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/108397537' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 26 13:54:17 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29717 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:17 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31264 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:17 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21531 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:17 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 26 13:54:17 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627475436' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 26 13:54:17 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29732 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:17 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31270 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:17 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21546 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510171 data_alloc: 234881024 data_used: 18661376
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 25919488 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b926c000/0x0/0x1bfc00000, data 0x1adbbca/0x1be2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 25919488 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 185 ms_handle_reset con 0x556c77cd2400 session 0x556c77b38f00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 25919488 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 25919488 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 25919488 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510257 data_alloc: 234881024 data_used: 18661376
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131088384 unmapped: 25919488 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b926b000/0x0/0x1bfc00000, data 0x1adbc2c/0x1be3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 25911296 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 25911296 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131096576 unmapped: 25911296 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131104768 unmapped: 25903104 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b926b000/0x0/0x1bfc00000, data 0x1adbc2c/0x1be3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510257 data_alloc: 234881024 data_used: 18661376
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131104768 unmapped: 25903104 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131104768 unmapped: 25903104 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131112960 unmapped: 25894912 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 185 heartbeat osd_stat(store_statfs(0x1b926b000/0x0/0x1bfc00000, data 0x1adbc2c/0x1be3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131112960 unmapped: 25894912 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131112960 unmapped: 25894912 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510257 data_alloc: 234881024 data_used: 18661376
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131112960 unmapped: 25894912 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 131112960 unmapped: 25894912 heap: 157007872 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.320089340s of 20.907712936s, submitted: 51
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b926b000/0x0/0x1bfc00000, data 0x1adbc2c/0x1be3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c78c74400 session 0x556c773b7e00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 33202176 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 33202176 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 33202176 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1623775 data_alloc: 234881024 data_used: 18669568
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 33202176 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 33202176 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 33202176 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8267000/0x0/0x1bfc00000, data 0x2add885/0x2be6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 33202176 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c78c74800 session 0x556c77c04000
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c797a8400 session 0x556c77c05860
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74836c00 session 0x556c79657680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132227072 unmapped: 33177600 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8267000/0x0/0x1bfc00000, data 0x2add895/0x2be7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1626197 data_alloc: 234881024 data_used: 18669568
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132227072 unmapped: 33177600 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132235264 unmapped: 33169408 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.108837128s of 10.147722244s, submitted: 9
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c77cd2400 session 0x556c77459e00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132235264 unmapped: 33169408 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132235264 unmapped: 33169408 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 33161216 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8265000/0x0/0x1bfc00000, data 0x2add907/0x2be9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1631105 data_alloc: 234881024 data_used: 18669568
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 33161216 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 33161216 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 33161216 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 33161216 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8265000/0x0/0x1bfc00000, data 0x2add907/0x2be9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 33161216 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c78c74400 session 0x556c77402b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1667105 data_alloc: 251658240 data_used: 28696576
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144179200 unmapped: 21225472 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8265000/0x0/0x1bfc00000, data 0x2add907/0x2be9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1667105 data_alloc: 251658240 data_used: 28696576
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8265000/0x0/0x1bfc00000, data 0x2add907/0x2be9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1667105 data_alloc: 251658240 data_used: 28696576
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8265000/0x0/0x1bfc00000, data 0x2add907/0x2be9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.161310196s of 21.161310196s, submitted: 0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c78c74800 session 0x556c753b4d20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143802368 unmapped: 21602304 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1670009 data_alloc: 251658240 data_used: 29745152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c797a8400 session 0x556c755ef860
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 21643264 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74836c00 session 0x556c747f0780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143761408 unmapped: 21643264 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b7e44000/0x0/0x1bfc00000, data 0x2add969/0x2bea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x51cf9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c77cd2400 session 0x556c773132c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143638528 unmapped: 21766144 heap: 165404672 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c78c74800 session 0x556c75536960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74f16400 session 0x556c77c043c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 151232512 unmapped: 17850368 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c78c74400 session 0x556c747d43c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144130048 unmapped: 24952832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8b97000/0x0/0x1bfc00000, data 0x31cb907/0x32d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74836c00 session 0x556c747d45a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1727471 data_alloc: 251658240 data_used: 29745152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144130048 unmapped: 24952832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144130048 unmapped: 24952832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8b96000/0x0/0x1bfc00000, data 0x31cb917/0x32d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144130048 unmapped: 24952832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.723974228s of 10.226922035s, submitted: 39
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74f16400 session 0x556c747d54a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144130048 unmapped: 24952832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c78c74800 session 0x556c7565a5a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8b96000/0x0/0x1bfc00000, data 0x31cb917/0x32d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144138240 unmapped: 24944640 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8b95000/0x0/0x1bfc00000, data 0x31cb940/0x32d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [0,0,0,0,0,1,1,5,12])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c76a8c000 session 0x556c7565a000
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c77cd2400 session 0x556c77e0a3c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1778824 data_alloc: 251658240 data_used: 29745152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144367616 unmapped: 24715264 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74836c00 session 0x556c77e0a5a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b85b5000/0x0/0x1bfc00000, data 0x37ac930/0x38b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74f16400 session 0x556c773b6f00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c76a8c000 session 0x556c77e0a960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b85b5000/0x0/0x1bfc00000, data 0x37ac930/0x38b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 24002560 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8343000/0x0/0x1bfc00000, data 0x3a1d979/0x3b2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 24002560 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8343000/0x0/0x1bfc00000, data 0x3a1d979/0x3b2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 24002560 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 24002560 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1807304 data_alloc: 251658240 data_used: 29745152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 24002560 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 24002560 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145080320 unmapped: 24002560 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145088512 unmapped: 23994368 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8343000/0x0/0x1bfc00000, data 0x3a1d979/0x3b2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145088512 unmapped: 23994368 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1807304 data_alloc: 251658240 data_used: 29745152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145088512 unmapped: 23994368 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145096704 unmapped: 23986176 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145096704 unmapped: 23986176 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8343000/0x0/0x1bfc00000, data 0x3a1d979/0x3b2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 23969792 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 23969792 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8343000/0x0/0x1bfc00000, data 0x3a1d979/0x3b2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1807304 data_alloc: 251658240 data_used: 29745152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 23969792 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 23969792 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8343000/0x0/0x1bfc00000, data 0x3a1d979/0x3b2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 23969792 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8343000/0x0/0x1bfc00000, data 0x3a1d979/0x3b2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 23969792 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 23969792 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1807304 data_alloc: 251658240 data_used: 29745152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 23969792 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.749660492s of 23.216871262s, submitted: 50
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 23969792 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c78c74800 session 0x556c777a8b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145121280 unmapped: 23961600 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c76a8c400 session 0x556c777a8d20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74836c00 session 0x556c777a9680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145129472 unmapped: 23953408 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b8345000/0x0/0x1bfc00000, data 0x3a1d907/0x3b29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74f16400 session 0x556c777a85a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145129472 unmapped: 23953408 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c76a8c000 session 0x556c777a8960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c76a8c400 session 0x556c75a75c20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1733362 data_alloc: 251658240 data_used: 29745152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c78c74800 session 0x556c774e7680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 23920640 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 23920640 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74836c00 session 0x556c77aee1e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145170432 unmapped: 23912448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 23896064 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74f16400 session 0x556c77aef860
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b91f6000/0x0/0x1bfc00000, data 0x259a833/0x26a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 23896064 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c76a8c000 session 0x556c777a8780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1634947 data_alloc: 251658240 data_used: 29745152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 23887872 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.600337029s of 10.284934044s, submitted: 70
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c76a8c400 session 0x556c77e0af00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 23887872 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c76a8c800 session 0x556c77e0bc20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145154048 unmapped: 23928832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b97cd000/0x0/0x1bfc00000, data 0x259a7c1/0x26a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145154048 unmapped: 23928832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145154048 unmapped: 23928832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74836c00 session 0x556c75a75680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1632785 data_alloc: 251658240 data_used: 29745152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145154048 unmapped: 23928832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145154048 unmapped: 23928832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74f16400 session 0x556c753b45a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145154048 unmapped: 23928832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b97cd000/0x0/0x1bfc00000, data 0x259a7c1/0x26a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c76a8c000 session 0x556c77f51c20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 23920640 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145162240 unmapped: 23920640 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c76a8c400 session 0x556c75b2e1e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1632311 data_alloc: 251658240 data_used: 29745152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c76a8cc00 session 0x556c77c04b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 23896064 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 23896064 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.662204742s of 10.913847923s, submitted: 45
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 ms_handle_reset con 0x556c74836c00 session 0x556c77e7ad20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 23896064 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b97ce000/0x0/0x1bfc00000, data 0x259a7b1/0x26a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 23896064 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 23887872 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 186 handle_osd_map epochs [186,187], i have 186, src has [1,187]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1635670 data_alloc: 251658240 data_used: 29753344
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 146251776 unmapped: 22831104 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 187 ms_handle_reset con 0x556c74f16400 session 0x556c77aef0e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b97ce000/0x0/0x1bfc00000, data 0x259a7b1/0x26a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1487558 data_alloc: 234881024 data_used: 14024704
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 187 heartbeat osd_stat(store_statfs(0x1ba7ca000/0x0/0x1bfc00000, data 0x159c45e/0x16a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 187 heartbeat osd_stat(store_statfs(0x1ba7ca000/0x0/0x1bfc00000, data 0x159c45e/0x16a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 187 heartbeat osd_stat(store_statfs(0x1ba7ca000/0x0/0x1bfc00000, data 0x159c45e/0x16a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.947020531s of 11.004800797s, submitted: 19
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490532 data_alloc: 234881024 data_used: 14024704
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490532 data_alloc: 234881024 data_used: 14024704
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490532 data_alloc: 234881024 data_used: 14024704
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490532 data_alloc: 234881024 data_used: 14024704
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490532 data_alloc: 234881024 data_used: 14024704
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490532 data_alloc: 234881024 data_used: 14024704
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490532 data_alloc: 234881024 data_used: 14024704
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 35192832 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133898240 unmapped: 35184640 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133898240 unmapped: 35184640 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490532 data_alloc: 234881024 data_used: 14024704
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 35176448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 35176448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 35176448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 35176448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 35176448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490532 data_alloc: 234881024 data_used: 14024704
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 35176448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 35176448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 35176448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 35176448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 35176448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1490532 data_alloc: 234881024 data_used: 14024704
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 35176448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 heartbeat osd_stat(store_statfs(0x1ba7c7000/0x0/0x1bfc00000, data 0x159df9d/0x16a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133906432 unmapped: 35176448 heap: 169082880 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 48.659267426s of 48.675277710s, submitted: 13
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133971968 unmapped: 43507712 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 189 ms_handle_reset con 0x556c76a8c000 session 0x556c774e6780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132448256 unmapped: 45031424 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132448256 unmapped: 45031424 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1553942 data_alloc: 234881024 data_used: 14032896
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132456448 unmapped: 45023232 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132456448 unmapped: 45023232 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 189 heartbeat osd_stat(store_statfs(0x1b9fc2000/0x0/0x1bfc00000, data 0x1d9fc29/0x1eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 189 ms_handle_reset con 0x556c76a8c400 session 0x556c77c7a960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 44998656 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 44998656 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 189 ms_handle_reset con 0x556c76a8d000 session 0x556c77402960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 189 ms_handle_reset con 0x556c74836c00 session 0x556c774ea780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132481024 unmapped: 44998656 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 189 heartbeat osd_stat(store_statfs(0x1b9fc1000/0x0/0x1bfc00000, data 0x1d9fc29/0x1eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1552934 data_alloc: 234881024 data_used: 14036992
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132497408 unmapped: 44982272 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132521984 unmapped: 44957696 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.537178993s of 10.109189987s, submitted: 70
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132546560 unmapped: 44933120 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 190 ms_handle_reset con 0x556c74f16400 session 0x556c77403e00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 190 heartbeat osd_stat(store_statfs(0x1ba7c0000/0x0/0x1bfc00000, data 0x15a18a3/0x16ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132546560 unmapped: 44933120 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132546560 unmapped: 44933120 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 191 ms_handle_reset con 0x556c76a8c000 session 0x556c77aba5a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1559273 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 44941312 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 191 ms_handle_reset con 0x556c77f8e800 session 0x556c74cf83c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 44941312 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 44941312 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 191 heartbeat osd_stat(store_statfs(0x1b9fbe000/0x0/0x1bfc00000, data 0x1da3518/0x1eaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 191 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 44941312 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 44941312 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1562247 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 44941312 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 192 heartbeat osd_stat(store_statfs(0x1b9fbb000/0x0/0x1bfc00000, data 0x1da5057/0x1eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 44941312 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 192 heartbeat osd_stat(store_statfs(0x1b9fbb000/0x0/0x1bfc00000, data 0x1da5057/0x1eb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 44941312 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 44941312 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 44941312 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.586647987s of 13.252228737s, submitted: 29
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 193 ms_handle_reset con 0x556c76a8d400 session 0x556c77abba40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511645 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 193 heartbeat osd_stat(store_statfs(0x1ba7b8000/0x0/0x1bfc00000, data 0x15a6d04/0x16b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 193 heartbeat osd_stat(store_statfs(0x1ba7b8000/0x0/0x1bfc00000, data 0x15a6d04/0x16b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511645 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 193 heartbeat osd_stat(store_statfs(0x1ba7b8000/0x0/0x1bfc00000, data 0x15a6d04/0x16b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514619 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514619 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514619 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514619 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514619 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514619 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514619 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132562944 unmapped: 44916736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514619 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 44908544 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 44908544 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 44908544 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 44908544 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 44908544 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b5000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c76a8d800 session 0x556c75536f00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1514619 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 44908544 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c74836c00 session 0x556c75a76780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 44908544 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 52.678142548s of 52.742111206s, submitted: 28
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 44908544 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c74f16400 session 0x556c75a77e00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b3000/0x0/0x1bfc00000, data 0x15a88b5/0x16ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c76a8c000 session 0x556c77a99a40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 44908544 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 44908544 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1516604 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132571136 unmapped: 44908544 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c76a8d400 session 0x556c77a99680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132751360 unmapped: 44728320 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c76a8dc00 session 0x556c77a98d20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c74836c00 session 0x556c77c052c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c74f16400 session 0x556c774465a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1ba7b6000/0x0/0x1bfc00000, data 0x15a8843/0x16b8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c76a8c000 session 0x556c74320b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1619568 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1619568 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1619568 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 44072960 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 44064768 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 44064768 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 44064768 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 44064768 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c79837c00 session 0x556c755afc20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1619568 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 44064768 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 44064768 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 44064768 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133414912 unmapped: 44064768 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 44056576 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1619568 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 44056576 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 44056576 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 44056576 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 44056576 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 44056576 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1619568 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 44056576 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 44056576 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c79837c00 session 0x556c77a98780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 44056576 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c76a8dc00 session 0x556c77a98f00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133423104 unmapped: 44056576 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 36.203277588s of 36.518817902s, submitted: 146
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c74836c00 session 0x556c774e5680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c74f16400 session 0x556c77aee5a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 44048384 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1618860 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a80000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 44048384 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 44048384 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c76a8c000 session 0x556c777a83c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133431296 unmapped: 44048384 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c76a8dc00 session 0x556c77b390e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c79837c00 session 0x556c774e70e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 44040192 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133439488 unmapped: 44040192 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a80000/0x0/0x1bfc00000, data 0x22dd853/0x23ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c74f16400 session 0x556c773b7a40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c74836c00 session 0x556c774e52c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1620224 data_alloc: 234881024 data_used: 14049280
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133734400 unmapped: 43745280 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c76a8c000 session 0x556c77c054a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c76a8dc00 session 0x556c77b38000
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c79837c00 session 0x556c77312b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b9a7f000/0x0/0x1bfc00000, data 0x22dd863/0x23ef000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 42688512 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b8eab000/0x0/0x1bfc00000, data 0x2eb0863/0x2fc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 42688512 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c74836c00 session 0x556c774030e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c74f16400 session 0x556c77411a40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134725632 unmapped: 42754048 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c76a8c000 session 0x556c74d62000
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c76a8dc00 session 0x556c74d632c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b8eaa000/0x0/0x1bfc00000, data 0x2eb088c/0x2fc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 42926080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1801946 data_alloc: 234881024 data_used: 14053376
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 42926080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 42926080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 42926080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134569984 unmapped: 42909696 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.449951172s of 14.703312874s, submitted: 208
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c77842400 session 0x556c7565be00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 ms_handle_reset con 0x556c74836c00 session 0x556c747d54a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 42647552 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 195 handle_osd_map epochs [195,196], i have 195, src has [1,196]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 196 ms_handle_reset con 0x556c74f16400 session 0x556c774e6780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 196 heartbeat osd_stat(store_statfs(0x1b9055000/0x0/0x1bfc00000, data 0x2d068c5/0x2e19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1724522 data_alloc: 234881024 data_used: 14065664
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134889472 unmapped: 42590208 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 197 ms_handle_reset con 0x556c76a8c000 session 0x556c77e7a3c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 42106880 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 198 ms_handle_reset con 0x556c76a8dc00 session 0x556c75a770e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 198 ms_handle_reset con 0x556c77842800 session 0x556c74bfc1e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 198 ms_handle_reset con 0x556c77842000 session 0x556c75a76b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135438336 unmapped: 42041344 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 198 ms_handle_reset con 0x556c74836c00 session 0x556c77411860
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 198 ms_handle_reset con 0x556c76a8c000 session 0x556c77b38f00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 41828352 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 198 ms_handle_reset con 0x556c74f16400 session 0x556c77e0ad20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135651328 unmapped: 41828352 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 198 heartbeat osd_stat(store_statfs(0x1b8eb2000/0x0/0x1bfc00000, data 0x2ea2210/0x2fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1751902 data_alloc: 234881024 data_used: 14061568
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 198 ms_handle_reset con 0x556c77842c00 session 0x556c755ef860
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135667712 unmapped: 41811968 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 198 ms_handle_reset con 0x556c74836c00 session 0x556c77aba000
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 199 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 199 ms_handle_reset con 0x556c74f16400 session 0x556c75a750e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 199 ms_handle_reset con 0x556c76a8c000 session 0x556c74cf83c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 199 ms_handle_reset con 0x556c77842000 session 0x556c774465a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 199 ms_handle_reset con 0x556c76a8dc00 session 0x556c77dc2f00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133160960 unmapped: 44318720 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 199 handle_osd_map epochs [199,200], i have 199, src has [1,200]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133185536 unmapped: 44294144 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133185536 unmapped: 44294144 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133185536 unmapped: 44294144 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 200 heartbeat osd_stat(store_statfs(0x1b86d7000/0x0/0x1bfc00000, data 0x3677a26/0x3795000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1821178 data_alloc: 234881024 data_used: 14073856
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133185536 unmapped: 44294144 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133185536 unmapped: 44294144 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.652902603s of 13.820633888s, submitted: 335
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 200 ms_handle_reset con 0x556c74836c00 session 0x556c77446d20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 44941312 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132538368 unmapped: 44941312 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 200 heartbeat osd_stat(store_statfs(0x1b86d8000/0x0/0x1bfc00000, data 0x3677a88/0x3796000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132546560 unmapped: 44933120 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 200 handle_osd_map epochs [200,201], i have 200, src has [1,201]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 201 ms_handle_reset con 0x556c74f16400 session 0x556c79657680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1825750 data_alloc: 234881024 data_used: 14086144
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132603904 unmapped: 44875776 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 201 ms_handle_reset con 0x556c76a8c000 session 0x556c75536f00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132669440 unmapped: 44810240 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 201 handle_osd_map epochs [201,202], i have 201, src has [1,202]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 202 ms_handle_reset con 0x556c77843000 session 0x556c747d5860
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 202 ms_handle_reset con 0x556c77842c00 session 0x556c79656b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 202 ms_handle_reset con 0x556c77843400 session 0x556c77abaf00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 202 heartbeat osd_stat(store_statfs(0x1b82c2000/0x0/0x1bfc00000, data 0x367b470/0x379a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [0,1])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132767744 unmapped: 44711936 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 202 ms_handle_reset con 0x556c77842000 session 0x556c77e0ba40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 202 ms_handle_reset con 0x556c74f16400 session 0x556c77c050e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 202 ms_handle_reset con 0x556c76a8c000 session 0x556c755ee000
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 202 ms_handle_reset con 0x556c77842c00 session 0x556c741114a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 202 ms_handle_reset con 0x556c74836c00 session 0x556c753b4d20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 202 ms_handle_reset con 0x556c77843000 session 0x556c774e74a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1584714 data_alloc: 234881024 data_used: 14073856
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 202 heartbeat osd_stat(store_statfs(0x1b9cc2000/0x0/0x1bfc00000, data 0x15b6c93/0x16d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.956377029s of 10.705276489s, submitted: 196
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1588536 data_alloc: 234881024 data_used: 14082048
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba38a000/0x0/0x1bfc00000, data 0x15b880a/0x16d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1588536 data_alloc: 234881024 data_used: 14082048
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 44670976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba38a000/0x0/0x1bfc00000, data 0x15b880a/0x16d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba38a000/0x0/0x1bfc00000, data 0x15b880a/0x16d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132816896 unmapped: 44662784 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132816896 unmapped: 44662784 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132816896 unmapped: 44662784 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1588536 data_alloc: 234881024 data_used: 14082048
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132816896 unmapped: 44662784 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba38a000/0x0/0x1bfc00000, data 0x15b880a/0x16d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132816896 unmapped: 44662784 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132816896 unmapped: 44662784 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132816896 unmapped: 44662784 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132816896 unmapped: 44662784 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba38a000/0x0/0x1bfc00000, data 0x15b880a/0x16d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1588536 data_alloc: 234881024 data_used: 14082048
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132816896 unmapped: 44662784 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132816896 unmapped: 44662784 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.707324982s of 18.719854355s, submitted: 24
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77842c00 session 0x556c774eb2c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba38a000/0x0/0x1bfc00000, data 0x15b880a/0x16d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132825088 unmapped: 44654592 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba38a000/0x0/0x1bfc00000, data 0x15b881a/0x16d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132825088 unmapped: 44654592 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c74f16400 session 0x556c77e6ad20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132825088 unmapped: 44654592 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba389000/0x0/0x1bfc00000, data 0x15b887c/0x16d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1592740 data_alloc: 234881024 data_used: 14082048
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132833280 unmapped: 44646400 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba389000/0x0/0x1bfc00000, data 0x15b887c/0x16d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132833280 unmapped: 44646400 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132833280 unmapped: 44646400 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 132833280 unmapped: 44646400 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba389000/0x0/0x1bfc00000, data 0x15b887c/0x16d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133111808 unmapped: 44367872 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c76a8c000 session 0x556c77b38780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c74836c00 session 0x556c774021e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77842c00 session 0x556c773b63c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1639795 data_alloc: 234881024 data_used: 14082048
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133111808 unmapped: 44367872 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77843000 session 0x556c77402b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77842000 session 0x556c77459a40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c74f16400 session 0x556c75c714a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133128192 unmapped: 44351488 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.735950470s of 10.831986427s, submitted: 22
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 136806400 unmapped: 40673280 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1b9e62000/0x0/0x1bfc00000, data 0x1adf87c/0x1bfc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c74836c00 session 0x556c774590e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 44343296 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133136384 unmapped: 44343296 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77842000 session 0x556c74321c20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77842c00 session 0x556c755afc20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1668105 data_alloc: 234881024 data_used: 14082048
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133152768 unmapped: 44326912 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1b9ae0000/0x0/0x1bfc00000, data 0x1e6281a/0x1f7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77843400 session 0x556c77abba40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133152768 unmapped: 44326912 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1b9adf000/0x0/0x1bfc00000, data 0x1e6283d/0x1f7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133152768 unmapped: 44326912 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133152768 unmapped: 44326912 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77843800 session 0x556c77abbe00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77843c00 session 0x556c772e2960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 44089344 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1b9abb000/0x0/0x1bfc00000, data 0x1e86850/0x1fa3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1709405 data_alloc: 234881024 data_used: 19390464
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134856704 unmapped: 42622976 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 39657472 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 39657472 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 39657472 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 39657472 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735005 data_alloc: 234881024 data_used: 23060480
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 137822208 unmapped: 39657472 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77842000 session 0x556c74320960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c74836c00 session 0x556c75b2ef00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.098026276s of 13.564885139s, submitted: 42
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1b9aba000/0x0/0x1bfc00000, data 0x1e86860/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 41074688 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77843400 session 0x556c79656960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 41074688 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77842c00 session 0x556c77e0b4a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 41074688 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 41074688 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1688050 data_alloc: 234881024 data_used: 19398656
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77843000 session 0x556c772e3e00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 41074688 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1b9e20000/0x0/0x1bfc00000, data 0x1b1f8a0/0x1c3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 41074688 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1b9e21000/0x0/0x1bfc00000, data 0x1b1f8a0/0x1c3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,5])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c74836c00 session 0x556c77a985a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134545408 unmapped: 42934272 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134545408 unmapped: 42934272 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba348000/0x0/0x1bfc00000, data 0x15f887d/0x1715000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134545408 unmapped: 42934272 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1607852 data_alloc: 234881024 data_used: 14086144
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134545408 unmapped: 42934272 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134545408 unmapped: 42934272 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77842000 session 0x556c753b52c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.692597389s of 10.763281822s, submitted: 51
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77843400 session 0x556c743214a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 42926080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba38a000/0x0/0x1bfc00000, data 0x15b880a/0x16d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 42926080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba38a000/0x0/0x1bfc00000, data 0x15b880a/0x16d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 42926080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77843c00 session 0x556c75c70b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1603694 data_alloc: 218103808 data_used: 14082048
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba38a000/0x0/0x1bfc00000, data 0x15b880a/0x16d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 42926080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c74836c00 session 0x556c747d4960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 42926080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77842000 session 0x556c77c7ba40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 42926080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 42926080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba38a000/0x0/0x1bfc00000, data 0x15b880a/0x16d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134553600 unmapped: 42926080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 heartbeat osd_stat(store_statfs(0x1ba38a000/0x0/0x1bfc00000, data 0x15b880a/0x16d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77843000 session 0x556c77c041e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1605147 data_alloc: 218103808 data_used: 14082048
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77843400 session 0x556c74bfd2c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 134578176 unmapped: 42901504 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c77843c00 session 0x556c75a9de00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 ms_handle_reset con 0x556c74836c00 session 0x556c77e0a960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135012352 unmapped: 42467328 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135012352 unmapped: 42467328 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.012305260s of 11.401415825s, submitted: 62
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135012352 unmapped: 42467328 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77842000 session 0x556c773b8f00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9a06000/0x0/0x1bfc00000, data 0x1f3a4c5/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135012352 unmapped: 42467328 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1685350 data_alloc: 218103808 data_used: 14090240
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135012352 unmapped: 42467328 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135012352 unmapped: 42467328 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9a06000/0x0/0x1bfc00000, data 0x1f3a4c5/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135012352 unmapped: 42467328 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135020544 unmapped: 42459136 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9a06000/0x0/0x1bfc00000, data 0x1f3a4c5/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135020544 unmapped: 42459136 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1685350 data_alloc: 218103808 data_used: 14090240
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135020544 unmapped: 42459136 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135020544 unmapped: 42459136 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135020544 unmapped: 42459136 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9a06000/0x0/0x1bfc00000, data 0x1f3a4c5/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135020544 unmapped: 42459136 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 42450944 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1685350 data_alloc: 218103808 data_used: 14090240
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 42450944 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 42450944 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 42450944 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 42450944 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9a06000/0x0/0x1bfc00000, data 0x1f3a4c5/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135036928 unmapped: 42442752 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9a06000/0x0/0x1bfc00000, data 0x1f3a4c5/0x2057000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.422998428s of 17.443885803s, submitted: 7
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1687178 data_alloc: 218103808 data_used: 14090240
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135036928 unmapped: 42442752 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77843400 session 0x556c747d5680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77f8e800 session 0x556c77aeed20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c74f17c00 session 0x556c773b9c20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c74836c00 session 0x556c79657680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77842000 session 0x556c773b9a40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 41885696 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77843400 session 0x556c774465a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77843000 session 0x556c774ea3c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 41885696 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77f8e800 session 0x556c7565b680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 41885696 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 41885696 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1731606 data_alloc: 218103808 data_used: 14094336
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 41885696 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9566000/0x0/0x1bfc00000, data 0x23d660c/0x24f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c74836c00 session 0x556c743210e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 41885696 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 41885696 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 41885696 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77842000 session 0x556c74321e00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135593984 unmapped: 41885696 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9566000/0x0/0x1bfc00000, data 0x23d660c/0x24f8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77843000 session 0x556c755361e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77843400 session 0x556c778a4b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.017614365s of 10.002804756s, submitted: 39
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77b99000 session 0x556c77c04d20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1736008 data_alloc: 218103808 data_used: 14094336
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135749632 unmapped: 41730048 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c74836c00 session 0x556c774e43c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77842000 session 0x556c77e7b2c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77843000 session 0x556c747d50e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 135782400 unmapped: 41697280 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 136044544 unmapped: 41435136 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9540000/0x0/0x1bfc00000, data 0x23fa63f/0x251e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 138469376 unmapped: 39010304 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142483456 unmapped: 34996224 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9540000/0x0/0x1bfc00000, data 0x23fa63f/0x251e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1832077 data_alloc: 234881024 data_used: 27217920
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142499840 unmapped: 34979840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142499840 unmapped: 34979840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9540000/0x0/0x1bfc00000, data 0x23fa63f/0x251e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77e60800 session 0x556c74d62000
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142532608 unmapped: 34947072 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c74e25c00 session 0x556c77f503c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142540800 unmapped: 34938880 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77e60400 session 0x556c7565b680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9540000/0x0/0x1bfc00000, data 0x23fa63f/0x251e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c74836c00 session 0x556c77a985a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142589952 unmapped: 34889728 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1827175 data_alloc: 234881024 data_used: 27213824
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142589952 unmapped: 34889728 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142589952 unmapped: 34889728 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142606336 unmapped: 34873344 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.490794182s of 12.689890862s, submitted: 27
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b9581000/0x0/0x1bfc00000, data 0x23ba5a9/0x24db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,4,11])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 148774912 unmapped: 28704768 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 148430848 unmapped: 29048832 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1879617 data_alloc: 234881024 data_used: 27271168
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 148635648 unmapped: 28844032 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77843000 session 0x556c75b2ef00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77842000 session 0x556c75b2f2c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 heartbeat osd_stat(store_statfs(0x1b8eb7000/0x0/0x1bfc00000, data 0x2a865a9/0x2ba7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 28254208 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 150175744 unmapped: 27303936 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 150175744 unmapped: 27303936 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 ms_handle_reset con 0x556c77e60800 session 0x556c755aeb40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 150183936 unmapped: 27295744 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1900548 data_alloc: 234881024 data_used: 29192192
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 204 handle_osd_map epochs [204,205], i have 204, src has [1,205]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 205 ms_handle_reset con 0x556c74836c00 session 0x556c77aef4a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 150200320 unmapped: 27279360 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 150200320 unmapped: 27279360 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 205 heartbeat osd_stat(store_statfs(0x1b8e17000/0x0/0x1bfc00000, data 0x2b261e4/0x2c46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [0,0,0,0,0,0,5])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 205 ms_handle_reset con 0x556c77843000 session 0x556c77f51860
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 143900672 unmapped: 33579008 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.668909550s of 10.000268936s, submitted: 129
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144154624 unmapped: 33325056 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144154624 unmapped: 33325056 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1762442 data_alloc: 218103808 data_used: 19341312
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144154624 unmapped: 33325056 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 205 ms_handle_reset con 0x556c77842000 session 0x556c74cf92c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 205 ms_handle_reset con 0x556c77e60800 session 0x556c75a9c960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144154624 unmapped: 33325056 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 205 heartbeat osd_stat(store_statfs(0x1b9779000/0x0/0x1bfc00000, data 0x21c6182/0x22e5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 205 handle_osd_map epochs [205,206], i have 205, src has [1,206]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144154624 unmapped: 33325056 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144343040 unmapped: 33136640 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144343040 unmapped: 33136640 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1767248 data_alloc: 218103808 data_used: 19349504
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144343040 unmapped: 33136640 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 ms_handle_reset con 0x556c77e60400 session 0x556c774e6b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b9763000/0x0/0x1bfc00000, data 0x21d9cc1/0x22fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155582464 unmapped: 21897216 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 ms_handle_reset con 0x556c74836c00 session 0x556c77dc21e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.484827042s of 10.010388374s, submitted: 48
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 ms_handle_reset con 0x556c77842000 session 0x556c74321e00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145686528 unmapped: 31793152 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 ms_handle_reset con 0x556c77843400 session 0x556c75536f00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 ms_handle_reset con 0x556c77b99000 session 0x556c79656780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1842885 data_alloc: 218103808 data_used: 19349504
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144908288 unmapped: 32571392 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b8d7e000/0x0/0x1bfc00000, data 0x2bbed23/0x2ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 144908288 unmapped: 32571392 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 ms_handle_reset con 0x556c77843000 session 0x556c773b8f00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1712796 data_alloc: 218103808 data_used: 14106624
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b99a3000/0x0/0x1bfc00000, data 0x1f9acb1/0x20ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b99a3000/0x0/0x1bfc00000, data 0x1f9acb1/0x20ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b99a3000/0x0/0x1bfc00000, data 0x1f9acb1/0x20ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1712796 data_alloc: 218103808 data_used: 14106624
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b99a3000/0x0/0x1bfc00000, data 0x1f9acb1/0x20ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b99a3000/0x0/0x1bfc00000, data 0x1f9acb1/0x20ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1712796 data_alloc: 218103808 data_used: 14106624
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b99a3000/0x0/0x1bfc00000, data 0x1f9acb1/0x20ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b99a3000/0x0/0x1bfc00000, data 0x1f9acb1/0x20ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b99a3000/0x0/0x1bfc00000, data 0x1f9acb1/0x20ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1712796 data_alloc: 218103808 data_used: 14106624
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b99a3000/0x0/0x1bfc00000, data 0x1f9acb1/0x20ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 16K writes, 60K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.01 MB/s#012Cumulative WAL: 16K writes, 5236 syncs, 3.24 writes per sync, written: 0.04 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3738 writes, 9852 keys, 3738 commit groups, 1.0 writes per commit group, ingest: 6.18 MB, 0.01 MB/s#012Interval WAL: 3738 writes, 1601 syncs, 2.33 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1712796 data_alloc: 218103808 data_used: 14106624
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b99a3000/0x0/0x1bfc00000, data 0x1f9acb1/0x20ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b99a3000/0x0/0x1bfc00000, data 0x1f9acb1/0x20ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 34750464 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 34742272 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1712796 data_alloc: 218103808 data_used: 14106624
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 34742272 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b99a3000/0x0/0x1bfc00000, data 0x1f9acb1/0x20ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 34742272 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 34742272 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142737408 unmapped: 34742272 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.812602997s of 35.863830566s, submitted: 24
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 ms_handle_reset con 0x556c74836c00 session 0x556c796563c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142745600 unmapped: 34734080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 ms_handle_reset con 0x556c77842000 session 0x556c77b39860
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1781671 data_alloc: 218103808 data_used: 14106624
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142745600 unmapped: 34734080 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7ef9000/0x0/0x1bfc00000, data 0x28a4d13/0x29c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7ef9000/0x0/0x1bfc00000, data 0x28a4d13/0x29c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 34725888 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 142753792 unmapped: 34725888 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7ef9000/0x0/0x1bfc00000, data 0x28a4d13/0x29c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1854791 data_alloc: 234881024 data_used: 24436736
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7ef9000/0x0/0x1bfc00000, data 0x28a4d13/0x29c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1854791 data_alloc: 234881024 data_used: 24436736
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.450545311s of 13.475263596s, submitted: 24
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 ms_handle_reset con 0x556c77b99000 session 0x556c77458960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 145571840 unmapped: 31907840 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 29638656 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7ef8000/0x0/0x1bfc00000, data 0x28a4d36/0x29c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,2])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1939906 data_alloc: 234881024 data_used: 33927168
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158507008 unmapped: 18972672 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157237248 unmapped: 20242432 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159793152 unmapped: 17686528 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7a54000/0x0/0x1bfc00000, data 0x2d3ad36/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159793152 unmapped: 17686528 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159793152 unmapped: 17686528 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7a54000/0x0/0x1bfc00000, data 0x2d3ad36/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1981154 data_alloc: 234881024 data_used: 34676736
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7a54000/0x0/0x1bfc00000, data 0x2d3ad36/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159793152 unmapped: 17686528 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159793152 unmapped: 17686528 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159793152 unmapped: 17686528 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159793152 unmapped: 17686528 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7a54000/0x0/0x1bfc00000, data 0x2d3ad36/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7a54000/0x0/0x1bfc00000, data 0x2d3ad36/0x2e5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159793152 unmapped: 17686528 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1981154 data_alloc: 234881024 data_used: 34676736
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.774983406s of 13.594219208s, submitted: 69
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159768576 unmapped: 17711104 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 161808384 unmapped: 15671296 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7222000/0x0/0x1bfc00000, data 0x357ad36/0x369c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162136064 unmapped: 15343616 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7215000/0x0/0x1bfc00000, data 0x3587d36/0x36a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162250752 unmapped: 15228928 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7215000/0x0/0x1bfc00000, data 0x3587d36/0x36a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162250752 unmapped: 15228928 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2057802 data_alloc: 234881024 data_used: 35672064
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162250752 unmapped: 15228928 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7215000/0x0/0x1bfc00000, data 0x3587d36/0x36a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162250752 unmapped: 15228928 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162250752 unmapped: 15228928 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162250752 unmapped: 15228928 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162250752 unmapped: 15228928 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2057802 data_alloc: 234881024 data_used: 35672064
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7215000/0x0/0x1bfc00000, data 0x3587d36/0x36a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 15220736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 15220736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 15220736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7215000/0x0/0x1bfc00000, data 0x3587d36/0x36a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 15220736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 15220736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2059722 data_alloc: 234881024 data_used: 35729408
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 15220736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 15220736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.753448486s of 16.880683899s, submitted: 57
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 ms_handle_reset con 0x556c77e60400 session 0x556c796561e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 15220736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 heartbeat osd_stat(store_statfs(0x1b7215000/0x0/0x1bfc00000, data 0x3587d36/0x36a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 15220736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162258944 unmapped: 15220736 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2055962 data_alloc: 234881024 data_used: 35733504
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 207 ms_handle_reset con 0x556c77e60800 session 0x556c74320000
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162299904 unmapped: 15179776 heap: 177479680 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 207 ms_handle_reset con 0x556c77842000 session 0x556c773125a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 207 ms_handle_reset con 0x556c74836c00 session 0x556c75a75680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 207 handle_osd_map epochs [207,208], i have 207, src has [1,208]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 207 handle_osd_map epochs [208,208], i have 208, src has [1,208]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 181092352 unmapped: 6512640 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 208 ms_handle_reset con 0x556c77b99000 session 0x556c77313680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 208 ms_handle_reset con 0x556c77b98400 session 0x556c75b2eb40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 171819008 unmapped: 15785984 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 209 ms_handle_reset con 0x556c77e60400 session 0x556c772e2b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 209 heartbeat osd_stat(store_statfs(0x1b6700000/0x0/0x1bfc00000, data 0x40973e7/0x41bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 171884544 unmapped: 15720448 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 210 ms_handle_reset con 0x556c74836c00 session 0x556c74cf8780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 171958272 unmapped: 15646720 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2192360 data_alloc: 251658240 data_used: 43405312
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 172023808 unmapped: 15581184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 210 ms_handle_reset con 0x556c77842000 session 0x556c77dc2000
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 210 heartbeat osd_stat(store_statfs(0x1b66fc000/0x0/0x1bfc00000, data 0x409905c/0x41bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 210 ms_handle_reset con 0x556c77b98400 session 0x556c773b9c20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 210 ms_handle_reset con 0x556c77b99000 session 0x556c755361e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169607168 unmapped: 17997824 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169607168 unmapped: 17997824 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169607168 unmapped: 17997824 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169607168 unmapped: 17997824 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2183256 data_alloc: 251658240 data_used: 43409408
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169607168 unmapped: 17997824 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 210 heartbeat osd_stat(store_statfs(0x1b66ff000/0x0/0x1bfc00000, data 0x409905c/0x41bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.585074425s of 14.885351181s, submitted: 62
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169615360 unmapped: 17989632 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169615360 unmapped: 17989632 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169615360 unmapped: 17989632 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169639936 unmapped: 17965056 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2187430 data_alloc: 251658240 data_used: 43417600
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169639936 unmapped: 17965056 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b66fb000/0x0/0x1bfc00000, data 0x409ab9b/0x41c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169639936 unmapped: 17965056 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169656320 unmapped: 17948672 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b66fb000/0x0/0x1bfc00000, data 0x409ab9b/0x41c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169762816 unmapped: 17842176 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169762816 unmapped: 17842176 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2205466 data_alloc: 251658240 data_used: 43413504
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b66e2000/0x0/0x1bfc00000, data 0x40e1b9b/0x41dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169811968 unmapped: 17793024 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169811968 unmapped: 17793024 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169811968 unmapped: 17793024 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169811968 unmapped: 17793024 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.607973099s of 12.918299675s, submitted: 25
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169811968 unmapped: 17793024 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b66e2000/0x0/0x1bfc00000, data 0x40e1b9b/0x41dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1974276 data_alloc: 234881024 data_used: 25235456
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 ms_handle_reset con 0x556c77b99400 session 0x556c75a9d2c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 28893184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 28893184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 28893184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b7839000/0x0/0x1bfc00000, data 0x2f8ab16/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 28893184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 28893184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1972780 data_alloc: 234881024 data_used: 25231360
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b7839000/0x0/0x1bfc00000, data 0x2f8ab16/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 28893184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 28893184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b7839000/0x0/0x1bfc00000, data 0x2f8ab16/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 28893184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b7839000/0x0/0x1bfc00000, data 0x2f8ab16/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 28893184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 28893184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1972780 data_alloc: 234881024 data_used: 25231360
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b7839000/0x0/0x1bfc00000, data 0x2f8ab16/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 28893184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158711808 unmapped: 28893184 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 ms_handle_reset con 0x556c74836c00 session 0x556c7565b680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 28884992 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 ms_handle_reset con 0x556c77b98400 session 0x556c77447860
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 28884992 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 heartbeat osd_stat(store_statfs(0x1b7839000/0x0/0x1bfc00000, data 0x2f8ab16/0x3083000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.370114326s of 14.737718582s, submitted: 29
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158720000 unmapped: 28884992 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 ms_handle_reset con 0x556c77842000 session 0x556c77e7bc20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 ms_handle_reset con 0x556c77b99000 session 0x556c773b7e00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1972444 data_alloc: 234881024 data_used: 25227264
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 ms_handle_reset con 0x556c77b98c00 session 0x556c774eba40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 ms_handle_reset con 0x556c74836c00 session 0x556c77a99860
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 ms_handle_reset con 0x556c77842000 session 0x556c78286b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157990912 unmapped: 29614080 heap: 187604992 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175603712 unmapped: 16203776 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 211 handle_osd_map epochs [211,212], i have 211, src has [1,212]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 212 ms_handle_reset con 0x556c77b98400 session 0x556c753b54a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 168624128 unmapped: 23183360 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 168624128 unmapped: 23183360 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 212 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 213 ms_handle_reset con 0x556c77b99000 session 0x556c773b72c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 168697856 unmapped: 23109632 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2113447 data_alloc: 234881024 data_used: 36155392
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 213 heartbeat osd_stat(store_statfs(0x1b6a7c000/0x0/0x1bfc00000, data 0x3d1a7c3/0x3e41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 213 ms_handle_reset con 0x556c77843400 session 0x556c77458b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165470208 unmapped: 26337280 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 213 ms_handle_reset con 0x556c74836c00 session 0x556c77e0af00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165502976 unmapped: 26304512 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165593088 unmapped: 26214400 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 214 ms_handle_reset con 0x556c77b98400 session 0x556c77403c20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160980992 unmapped: 30826496 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.881920815s of 10.055958748s, submitted: 443
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 214 ms_handle_reset con 0x556c77b99000 session 0x556c75b2fc20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154796032 unmapped: 37011456 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1840749 data_alloc: 218103808 data_used: 19070976
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154796032 unmapped: 37011456 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 214 heartbeat osd_stat(store_statfs(0x1b7fe4000/0x0/0x1bfc00000, data 0x23a10f3/0x24c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154804224 unmapped: 37003264 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 153870336 unmapped: 37937152 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 153870336 unmapped: 37937152 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 215 heartbeat osd_stat(store_statfs(0x1b7fe1000/0x0/0x1bfc00000, data 0x23a2c86/0x24cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 153870336 unmapped: 37937152 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1844747 data_alloc: 218103808 data_used: 19079168
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 153870336 unmapped: 37937152 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 153870336 unmapped: 37937152 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 153870336 unmapped: 37937152 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7fe1000/0x0/0x1bfc00000, data 0x23a2c86/0x24cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 153952256 unmapped: 37855232 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 37740544 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1861673 data_alloc: 218103808 data_used: 20680704
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 37740544 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7fde000/0x0/0x1bfc00000, data 0x23a47c5/0x24cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 37740544 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c77312780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77808400 session 0x556c79656f00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 37740544 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 37740544 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.127435684s of 15.339743614s, submitted: 53
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c77459c20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 37740544 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1863821 data_alloc: 218103808 data_used: 20688896
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 37740544 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 37740544 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7fdd000/0x0/0x1bfc00000, data 0x23a47d5/0x24d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154066944 unmapped: 37740544 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c74320780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c77dc2780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154075136 unmapped: 37732352 heap: 191807488 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c77dc3680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809400 session 0x556c774ebe00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154345472 unmapped: 41132032 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1942049 data_alloc: 218103808 data_used: 20692992
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154353664 unmapped: 41123840 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154353664 unmapped: 41123840 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b757b000/0x0/0x1bfc00000, data 0x2e077d5/0x2f33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c774e6000
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c774e65a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c774e74a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c774e61e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c774bb400 session 0x556c7565ad20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c75c712c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c75c70b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c75c71a40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c77e7be00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154828800 unmapped: 40648704 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154845184 unmapped: 40632320 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154845184 unmapped: 40632320 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b70ad000/0x0/0x1bfc00000, data 0x32d3847/0x3401000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1985262 data_alloc: 218103808 data_used: 20688896
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154845184 unmapped: 40632320 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154845184 unmapped: 40632320 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154845184 unmapped: 40632320 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154853376 unmapped: 40624128 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b70ad000/0x0/0x1bfc00000, data 0x32d3847/0x3401000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77e65000 session 0x556c77aef0e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.575284958s of 14.745795250s, submitted: 44
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c755afa40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154853376 unmapped: 40624128 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c773b9680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1912006 data_alloc: 218103808 data_used: 20684800
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7aed000/0x0/0x1bfc00000, data 0x2894837/0x29c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155000832 unmapped: 40476672 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155172864 unmapped: 40304640 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155172864 unmapped: 40304640 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7aed000/0x0/0x1bfc00000, data 0x2894837/0x29c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77842000 session 0x556c77a98780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155172864 unmapped: 40304640 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77e64c00 session 0x556c774eb2c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 151912448 unmapped: 43565056 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1785802 data_alloc: 218103808 data_used: 17326080
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 151912448 unmapped: 43565056 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 151912448 unmapped: 43565056 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 151912448 unmapped: 43565056 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 151912448 unmapped: 43565056 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b88c2000/0x0/0x1bfc00000, data 0x1abf837/0x1bec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 151912448 unmapped: 43565056 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1785802 data_alloc: 218103808 data_used: 17326080
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 151912448 unmapped: 43565056 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 151912448 unmapped: 43565056 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 151912448 unmapped: 43565056 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.830281258s of 13.874039650s, submitted: 18
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b88c2000/0x0/0x1bfc00000, data 0x1abf837/0x1bec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156925952 unmapped: 38551552 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159391744 unmapped: 36085760 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1857872 data_alloc: 218103808 data_used: 18284544
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159391744 unmapped: 36085760 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159391744 unmapped: 36085760 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7133000/0x0/0x1bfc00000, data 0x20a5837/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159391744 unmapped: 36085760 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7133000/0x0/0x1bfc00000, data 0x20a5837/0x21d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159391744 unmapped: 36085760 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159391744 unmapped: 36085760 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1858032 data_alloc: 218103808 data_used: 18288640
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c78a9f000 session 0x556c77459680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157540352 unmapped: 37937152 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c753b4000
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157540352 unmapped: 37937152 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157540352 unmapped: 37937152 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7139000/0x0/0x1bfc00000, data 0x20a8837/0x21d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.408258438s of 10.607426643s, submitted: 83
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c77aee5a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157548544 unmapped: 37928960 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157548544 unmapped: 37928960 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1854134 data_alloc: 218103808 data_used: 18300928
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157548544 unmapped: 37928960 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157548544 unmapped: 37928960 heap: 195477504 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77842000 session 0x556c77c04960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7138000/0x0/0x1bfc00000, data 0x20a8899/0x21d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,6,4])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77e64c00 session 0x556c7565ba40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b64dd000/0x0/0x1bfc00000, data 0x2d04837/0x2e31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b64dd000/0x0/0x1bfc00000, data 0x2d04837/0x2e31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1944515 data_alloc: 218103808 data_used: 18300928
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b64dd000/0x0/0x1bfc00000, data 0x2d04837/0x2e31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b64dd000/0x0/0x1bfc00000, data 0x2d04837/0x2e31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.702598572s of 11.033026695s, submitted: 19
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c78a9e400 session 0x556c7565a5a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1855797 data_alloc: 218103808 data_used: 18300928
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7139000/0x0/0x1bfc00000, data 0x20a8837/0x21d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c773b8960
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c79657680
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1855885 data_alloc: 218103808 data_used: 18305024
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7139000/0x0/0x1bfc00000, data 0x20a8837/0x21d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157589504 unmapped: 41566208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1855885 data_alloc: 218103808 data_used: 18305024
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.546453476s of 10.561464310s, submitted: 7
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157597696 unmapped: 41558016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7139000/0x0/0x1bfc00000, data 0x20a8837/0x21d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157605888 unmapped: 41549824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7139000/0x0/0x1bfc00000, data 0x20a8837/0x21d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157605888 unmapped: 41549824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157605888 unmapped: 41549824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157605888 unmapped: 41549824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1856133 data_alloc: 218103808 data_used: 18305024
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157605888 unmapped: 41549824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7139000/0x0/0x1bfc00000, data 0x20a8837/0x21d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157605888 unmapped: 41549824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7139000/0x0/0x1bfc00000, data 0x20a8837/0x21d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157605888 unmapped: 41549824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157605888 unmapped: 41549824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157605888 unmapped: 41549824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1856133 data_alloc: 218103808 data_used: 18305024
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157614080 unmapped: 41541632 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157614080 unmapped: 41541632 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7139000/0x0/0x1bfc00000, data 0x20a8837/0x21d5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.579452515s of 12.587181091s, submitted: 2
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c741114a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c77e0a780
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157614080 unmapped: 41541632 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157630464 unmapped: 41525248 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77842000 session 0x556c77e6af00
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155254784 unmapped: 43900928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1727021 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155254784 unmapped: 43900928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155254784 unmapped: 43900928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c13000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155254784 unmapped: 43900928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c75a66b40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 43474944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 43474944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1791763 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 43474944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 43474944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 43474944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b73f2000/0x0/0x1bfc00000, data 0x1df17c5/0x1f1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 43474944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 43474944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1791763 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b73f2000/0x0/0x1bfc00000, data 0x1df17c5/0x1f1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 43474944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b73f2000/0x0/0x1bfc00000, data 0x1df17c5/0x1f1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 43474944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 43474944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 43474944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b73f2000/0x0/0x1bfc00000, data 0x1df17c5/0x1f1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b73f2000/0x0/0x1bfc00000, data 0x1df17c5/0x1f1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155680768 unmapped: 43474944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.207145691s of 17.193283081s, submitted: 35
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c77e7ab40
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1795648 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155836416 unmapped: 43319296 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155852800 unmapped: 43302912 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 40599552 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b73ce000/0x0/0x1bfc00000, data 0x1e157c5/0x1f40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 40599552 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 40599552 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1855940 data_alloc: 218103808 data_used: 21938176
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 40599552 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 40599552 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 40599552 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 40599552 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b73ce000/0x0/0x1bfc00000, data 0x1e157c5/0x1f40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 40599552 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1855940 data_alloc: 218103808 data_used: 21938176
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b73ce000/0x0/0x1bfc00000, data 0x1e157c5/0x1f40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 40599552 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 40599552 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b73ce000/0x0/0x1bfc00000, data 0x1e157c5/0x1f40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158556160 unmapped: 40599552 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.062322617s of 13.086327553s, submitted: 7
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163045376 unmapped: 36110336 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6c6a000/0x0/0x1bfc00000, data 0x25797c5/0x26a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163291136 unmapped: 35864576 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1928432 data_alloc: 218103808 data_used: 23089152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164356096 unmapped: 34799616 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164356096 unmapped: 34799616 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164356096 unmapped: 34799616 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164356096 unmapped: 34799616 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6c41000/0x0/0x1bfc00000, data 0x25a27c5/0x26cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164356096 unmapped: 34799616 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1928432 data_alloc: 218103808 data_used: 23089152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6c41000/0x0/0x1bfc00000, data 0x25a27c5/0x26cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 36577280 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 36577280 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6c3e000/0x0/0x1bfc00000, data 0x25a57c5/0x26d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 36577280 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 36577280 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 36577280 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1922720 data_alloc: 218103808 data_used: 23089152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 36577280 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 36577280 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6c3e000/0x0/0x1bfc00000, data 0x25a57c5/0x26d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 36577280 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 36577280 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 36577280 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1922720 data_alloc: 218103808 data_used: 23089152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 36577280 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162578432 unmapped: 36577280 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 36569088 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6c3e000/0x0/0x1bfc00000, data 0x25a57c5/0x26d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 36569088 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 36569088 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1922720 data_alloc: 218103808 data_used: 23089152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 36569088 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 36569088 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6c3e000/0x0/0x1bfc00000, data 0x25a57c5/0x26d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 36569088 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 36569088 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 36569088 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6c3e000/0x0/0x1bfc00000, data 0x25a57c5/0x26d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1922720 data_alloc: 218103808 data_used: 23089152
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c773134a0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.950162888s of 27.550039291s, submitted: 74
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c75b2e1e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162586624 unmapped: 36569088 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6c3e000/0x0/0x1bfc00000, data 0x25a57c5/0x26d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6c3e000/0x0/0x1bfc00000, data 0x25a57c5/0x26d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156270592 unmapped: 42885120 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77e64c00 session 0x556c772e2000
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c77e7a1e0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c774032c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c774e63c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c75a9d2c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156278784 unmapped: 42876928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156286976 unmapped: 42868736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156286976 unmapped: 42868736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156286976 unmapped: 42868736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156286976 unmapped: 42868736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156286976 unmapped: 42868736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156286976 unmapped: 42868736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156295168 unmapped: 42860544 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156295168 unmapped: 42860544 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156295168 unmapped: 42860544 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156295168 unmapped: 42860544 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156295168 unmapped: 42860544 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156295168 unmapped: 42860544 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156295168 unmapped: 42860544 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156295168 unmapped: 42860544 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156295168 unmapped: 42860544 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156295168 unmapped: 42860544 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 42852352 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 42852352 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 42852352 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 42852352 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 42852352 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 42852352 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 42852352 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 42852352 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 42852352 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 42852352 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: mgrc handle_mgr_map Got map version 12
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2716354406,v1:192.168.122.100:6801/2716354406]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156311552 unmapped: 42844160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156319744 unmapped: 42835968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156319744 unmapped: 42835968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156319744 unmapped: 42835968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156319744 unmapped: 42835968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156319744 unmapped: 42835968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1735606 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156319744 unmapped: 42835968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156319744 unmapped: 42835968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 95.040405273s of 97.337112427s, submitted: 25
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c78a9e000 session 0x556c77b392c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156721152 unmapped: 42434560 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156721152 unmapped: 42434560 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156721152 unmapped: 42434560 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1776876 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156721152 unmapped: 42434560 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b76bc000/0x0/0x1bfc00000, data 0x1b277c5/0x1c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156721152 unmapped: 42434560 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 40361984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158826496 unmapped: 40329216 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: mgrc handle_mgr_map Got map version 13
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2716354406,v1:192.168.122.100:6801/2716354406]
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 40361984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1814316 data_alloc: 218103808 data_used: 18812928
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 40361984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b76bc000/0x0/0x1bfc00000, data 0x1b277c5/0x1c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 40361984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 40361984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 40361984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 40361984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1814316 data_alloc: 218103808 data_used: 18812928
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 40361984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b76bc000/0x0/0x1bfc00000, data 0x1b277c5/0x1c52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 40361984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158793728 unmapped: 40361984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.156377792s of 16.204078674s, submitted: 6
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159768576 unmapped: 39387136 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b73d7000/0x0/0x1bfc00000, data 0x1e0c7c5/0x1f37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 159768576 unmapped: 39387136 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1836950 data_alloc: 218103808 data_used: 18980864
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 38846464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 38846464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 38846464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 38846464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 38846464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841618 data_alloc: 218103808 data_used: 18972672
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b735d000/0x0/0x1bfc00000, data 0x1e867c5/0x1fb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 38846464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 38846464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 38846464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 38846464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 38846464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1841222 data_alloc: 218103808 data_used: 18972672
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b733c000/0x0/0x1bfc00000, data 0x1ea77c5/0x1fd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 38846464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 38846464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.506683350s of 13.633295059s, submitted: 38
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160317440 unmapped: 38838272 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160374784 unmapped: 38780928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160374784 unmapped: 38780928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1843138 data_alloc: 218103808 data_used: 18972672
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c77e7ad20
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7329000/0x0/0x1bfc00000, data 0x1eba7c5/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160374784 unmapped: 38780928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160374784 unmapped: 38780928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160374784 unmapped: 38780928 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1844418 data_alloc: 218103808 data_used: 19136512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7329000/0x0/0x1bfc00000, data 0x1eba7c5/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7329000/0x0/0x1bfc00000, data 0x1eba7c5/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1844418 data_alloc: 218103808 data_used: 19136512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7329000/0x0/0x1bfc00000, data 0x1eba7c5/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7329000/0x0/0x1bfc00000, data 0x1eba7c5/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.972942352s of 16.995885849s, submitted: 8
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7329000/0x0/0x1bfc00000, data 0x1eba7c5/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1847026 data_alloc: 218103808 data_used: 19337216
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7329000/0x0/0x1bfc00000, data 0x1eba7c5/0x1fe5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1847026 data_alloc: 218103808 data_used: 19337216
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160382976 unmapped: 38772736 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c77459860
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.760335922s of 10.768692017s, submitted: 2
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c773123c0
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1739750 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1739750 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1739750 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1739750 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:17 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1739750 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1739750 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1739750 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154689536 unmapped: 44466176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c774470e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c779ed800 session 0x556c75a9c780
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c774e6000
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c773b6780
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.572660446s of 31.591220856s, submitted: 6
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c77e0a3c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c75b2eb40
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c779ed400 session 0x556c773b8f00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c773b9680
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c773b9c20
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154886144 unmapped: 44269568 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154886144 unmapped: 44269568 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154886144 unmapped: 44269568 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154886144 unmapped: 44269568 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7568000/0x0/0x1bfc00000, data 0x1c79837/0x1da6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1798343 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154886144 unmapped: 44269568 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154886144 unmapped: 44269568 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c741114a0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154894336 unmapped: 44261376 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c74111860
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154894336 unmapped: 44261376 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7568000/0x0/0x1bfc00000, data 0x1c79837/0x1da6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c779ec400 session 0x556c77e7be00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c77e7a1e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 154902528 unmapped: 44253184 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1802051 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7566000/0x0/0x1bfc00000, data 0x1c7986a/0x1da8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155017216 unmapped: 44138496 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155033600 unmapped: 44122112 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155033600 unmapped: 44122112 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155033600 unmapped: 44122112 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155033600 unmapped: 44122112 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1844103 data_alloc: 218103808 data_used: 19369984
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7566000/0x0/0x1bfc00000, data 0x1c7986a/0x1da8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155033600 unmapped: 44122112 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155033600 unmapped: 44122112 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155033600 unmapped: 44122112 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155033600 unmapped: 44122112 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155033600 unmapped: 44122112 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1844103 data_alloc: 218103808 data_used: 19369984
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7566000/0x0/0x1bfc00000, data 0x1c7986a/0x1da8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 155033600 unmapped: 44122112 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.396913528s of 20.521284103s, submitted: 42
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157319168 unmapped: 41836544 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 157401088 unmapped: 41754624 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158752768 unmapped: 40402944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158752768 unmapped: 40402944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1892105 data_alloc: 218103808 data_used: 19705856
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b706a000/0x0/0x1bfc00000, data 0x216c86a/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158752768 unmapped: 40402944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b706a000/0x0/0x1bfc00000, data 0x216c86a/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158752768 unmapped: 40402944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158752768 unmapped: 40402944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158752768 unmapped: 40402944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158752768 unmapped: 40402944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1892105 data_alloc: 218103808 data_used: 19705856
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b706a000/0x0/0x1bfc00000, data 0x216c86a/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158752768 unmapped: 40402944 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7073000/0x0/0x1bfc00000, data 0x216c86a/0x229b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158760960 unmapped: 40394752 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158760960 unmapped: 40394752 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.554852486s of 11.747763634s, submitted: 89
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c7565a960
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c75b2fe00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 158760960 unmapped: 40394752 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c77aef0e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c773a8800 session 0x556c774592c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c74bfda40
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c77e6b2c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156172288 unmapped: 42983424 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1752852 data_alloc: 218103808 data_used: 13504512
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c7565b860
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 156516352 unmapped: 42639360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c774594a0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c12000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c773a8400 session 0x556c77312780
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160358400 unmapped: 38797312 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c747d5680
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c773b70e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c74320f00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c772e3680
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74f17c00 session 0x556c75a76780
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c74836c00 session 0x556c75a76f00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160366592 unmapped: 38789120 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77809c00 session 0x556c74bfc960
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160366592 unmapped: 38789120 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b98400 session 0x556c77e6af00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77b99000 session 0x556c755ae3c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160358400 unmapped: 38797312 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1810418 data_alloc: 234881024 data_used: 18886656
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b75f1000/0x0/0x1bfc00000, data 0x1bf07e5/0x1d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160161792 unmapped: 38993920 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 38699008 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b75f1000/0x0/0x1bfc00000, data 0x1bf07e5/0x1d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b75f1000/0x0/0x1bfc00000, data 0x1bf07e5/0x1d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 38699008 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 38699008 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 38699008 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1847378 data_alloc: 234881024 data_used: 24170496
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 38699008 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 38699008 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b75f1000/0x0/0x1bfc00000, data 0x1bf07e5/0x1d1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 38699008 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 38699008 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 38699008 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1847378 data_alloc: 234881024 data_used: 24170496
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160456704 unmapped: 38699008 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.174726486s of 18.364171982s, submitted: 66
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163708928 unmapped: 35446784 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b725b000/0x0/0x1bfc00000, data 0x1f867e5/0x20b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163708928 unmapped: 35446784 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163717120 unmapped: 35438592 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163717120 unmapped: 35438592 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1909162 data_alloc: 234881024 data_used: 24936448
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163717120 unmapped: 35438592 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6fab000/0x0/0x1bfc00000, data 0x22357e5/0x2362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163717120 unmapped: 35438592 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163717120 unmapped: 35438592 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6fab000/0x0/0x1bfc00000, data 0x22357e5/0x2362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b6fab000/0x0/0x1bfc00000, data 0x22357e5/0x2362000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163717120 unmapped: 35438592 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163717120 unmapped: 35438592 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1909162 data_alloc: 234881024 data_used: 24936448
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163717120 unmapped: 35438592 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77f8ec00 session 0x556c7565a5a0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77f8e800 session 0x556c753b4780
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.914326668s of 10.274970055s, submitted: 41
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 163717120 unmapped: 35438592 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77842000 session 0x556c75b2ef00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1765189 data_alloc: 234881024 data_used: 18878464
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1765189 data_alloc: 234881024 data_used: 18878464
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1765189 data_alloc: 234881024 data_used: 18878464
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1765189 data_alloc: 234881024 data_used: 18878464
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1765189 data_alloc: 234881024 data_used: 18878464
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1765189 data_alloc: 234881024 data_used: 18878464
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1765189 data_alloc: 234881024 data_used: 18878464
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77843400 session 0x556c755ae5a0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77f8e000 session 0x556c77458b40
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77f8e400 session 0x556c79657e00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77842000 session 0x556c77e7a3c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 38928384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.191795349s of 35.288177490s, submitted: 25
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77843400 session 0x556c772e2f00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7c14000/0x0/0x1bfc00000, data 0x15cf7c5/0x16fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160284672 unmapped: 38871040 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160284672 unmapped: 38871040 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7bf6000/0x0/0x1bfc00000, data 0x15ed7c5/0x1718000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160284672 unmapped: 38871040 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1768057 data_alloc: 234881024 data_used: 18878464
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160284672 unmapped: 38871040 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160284672 unmapped: 38871040 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160284672 unmapped: 38871040 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7bf6000/0x0/0x1bfc00000, data 0x15ed7c5/0x1718000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7bf6000/0x0/0x1bfc00000, data 0x15ed7c5/0x1718000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160284672 unmapped: 38871040 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160292864 unmapped: 38862848 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1768377 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160292864 unmapped: 38862848 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160292864 unmapped: 38862848 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7bf6000/0x0/0x1bfc00000, data 0x15ed7c5/0x1718000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160292864 unmapped: 38862848 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7bf6000/0x0/0x1bfc00000, data 0x15ed7c5/0x1718000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160292864 unmapped: 38862848 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160292864 unmapped: 38862848 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1768377 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160301056 unmapped: 38854656 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160301056 unmapped: 38854656 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7bf6000/0x0/0x1bfc00000, data 0x15ed7c5/0x1718000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160301056 unmapped: 38854656 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7bf6000/0x0/0x1bfc00000, data 0x15ed7c5/0x1718000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160301056 unmapped: 38854656 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 160301056 unmapped: 38854656 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1768537 data_alloc: 234881024 data_used: 18915328
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.196809769s of 18.203269958s, submitted: 1
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b78b2000/0x0/0x1bfc00000, data 0x19317c5/0x1a5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162775040 unmapped: 36380672 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7812000/0x0/0x1bfc00000, data 0x19c97c5/0x1af4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162881536 unmapped: 36274176 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162897920 unmapped: 36257792 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7803000/0x0/0x1bfc00000, data 0x19d87c5/0x1b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 36249600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162914304 unmapped: 36241408 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1812719 data_alloc: 234881024 data_used: 19259392
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7803000/0x0/0x1bfc00000, data 0x19d87c5/0x1b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162914304 unmapped: 36241408 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7803000/0x0/0x1bfc00000, data 0x19d87c5/0x1b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162922496 unmapped: 36233216 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162922496 unmapped: 36233216 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7803000/0x0/0x1bfc00000, data 0x19d87c5/0x1b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162922496 unmapped: 36233216 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162930688 unmapped: 36225024 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1812735 data_alloc: 234881024 data_used: 19259392
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162930688 unmapped: 36225024 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162930688 unmapped: 36225024 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7803000/0x0/0x1bfc00000, data 0x19d87c5/0x1b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162930688 unmapped: 36225024 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162938880 unmapped: 36216832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7803000/0x0/0x1bfc00000, data 0x19d87c5/0x1b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162938880 unmapped: 36216832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1812735 data_alloc: 234881024 data_used: 19259392
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7803000/0x0/0x1bfc00000, data 0x19d87c5/0x1b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162938880 unmapped: 36216832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162938880 unmapped: 36216832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162938880 unmapped: 36216832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162938880 unmapped: 36216832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 ms_handle_reset con 0x556c77f8e000 session 0x556c77e6ad20
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7803000/0x0/0x1bfc00000, data 0x19d87c5/0x1b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162938880 unmapped: 36216832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1812735 data_alloc: 234881024 data_used: 19259392
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7803000/0x0/0x1bfc00000, data 0x19d87c5/0x1b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162938880 unmapped: 36216832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162938880 unmapped: 36216832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162938880 unmapped: 36216832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162938880 unmapped: 36216832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b7803000/0x0/0x1bfc00000, data 0x19d87c5/0x1b03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 216 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.285184860s of 24.425603867s, submitted: 37
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162316288 unmapped: 36839424 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1812525 data_alloc: 234881024 data_used: 19267584
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162316288 unmapped: 36839424 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b7807000/0x0/0x1bfc00000, data 0x19da41e/0x1b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162316288 unmapped: 36839424 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162316288 unmapped: 36839424 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162324480 unmapped: 36831232 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b7807000/0x0/0x1bfc00000, data 0x19da41e/0x1b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162324480 unmapped: 36831232 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1812845 data_alloc: 234881024 data_used: 19279872
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162324480 unmapped: 36831232 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162324480 unmapped: 36831232 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b7807000/0x0/0x1bfc00000, data 0x19da41e/0x1b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162324480 unmapped: 36831232 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b7807000/0x0/0x1bfc00000, data 0x19da41e/0x1b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162324480 unmapped: 36831232 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b7807000/0x0/0x1bfc00000, data 0x19da41e/0x1b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162324480 unmapped: 36831232 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1812845 data_alloc: 234881024 data_used: 19279872
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.018420219s of 11.180746078s, submitted: 2
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162332672 unmapped: 36823040 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 218 heartbeat osd_stat(store_statfs(0x1b7807000/0x0/0x1bfc00000, data 0x19da41e/0x1b06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162332672 unmapped: 36823040 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162332672 unmapped: 36823040 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 218 heartbeat osd_stat(store_statfs(0x1b7804000/0x0/0x1bfc00000, data 0x19dc0cb/0x1b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 218 heartbeat osd_stat(store_statfs(0x1b7805000/0x0/0x1bfc00000, data 0x19dc0cb/0x1b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162332672 unmapped: 36823040 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 218 heartbeat osd_stat(store_statfs(0x1b7805000/0x0/0x1bfc00000, data 0x19dc0cb/0x1b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162332672 unmapped: 36823040 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1816571 data_alloc: 234881024 data_used: 19341312
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7805000/0x0/0x1bfc00000, data 0x19dc0cb/0x1b09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162357248 unmapped: 36798464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162357248 unmapped: 36798464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162357248 unmapped: 36798464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162357248 unmapped: 36798464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162455552 unmapped: 36700160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7801000/0x0/0x1bfc00000, data 0x19ddc0a/0x1b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1824937 data_alloc: 234881024 data_used: 19345408
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.0 total, 600.0 interval
Cumulative writes: 19K writes, 69K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s
Cumulative WAL: 19K writes, 6301 syncs, 3.10 writes per sync, written: 0.05 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2557 writes, 8679 keys, 2557 commit groups, 1.0 writes per commit group, ingest: 9.15 MB, 0.02 MB/s
Interval WAL: 2557 writes, 1065 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162455552 unmapped: 36700160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7802000/0x0/0x1bfc00000, data 0x19ddc0a/0x1b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162455552 unmapped: 36700160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162463744 unmapped: 36691968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162463744 unmapped: 36691968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77f8e800 session 0x556c773b9e00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162463744 unmapped: 36691968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1824937 data_alloc: 234881024 data_used: 19345408
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.573359489s of 14.631469727s, submitted: 53
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c74836c00 session 0x556c77dc3680
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162463744 unmapped: 36691968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162463744 unmapped: 36691968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162463744 unmapped: 36691968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162463744 unmapped: 36691968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162463744 unmapped: 36691968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1784911 data_alloc: 234881024 data_used: 18894848
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162463744 unmapped: 36691968 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: mgrc ms_handle_reset ms_handle_reset con 0x556c79836c00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2716354406
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2716354406,v1:192.168.122.100:6801/2716354406]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: mgrc handle_mgr_configure stats_period=5
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c76a8c400 session 0x556c774025a0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1784911 data_alloc: 234881024 data_used: 18894848
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1784911 data_alloc: 234881024 data_used: 18894848
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162709504 unmapped: 36446208 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1784911 data_alloc: 234881024 data_used: 18894848
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1784911 data_alloc: 234881024 data_used: 18894848
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1784911 data_alloc: 234881024 data_used: 18894848
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162717696 unmapped: 36438016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1784911 data_alloc: 234881024 data_used: 18894848
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162725888 unmapped: 36429824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162725888 unmapped: 36429824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162725888 unmapped: 36429824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162725888 unmapped: 36429824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162725888 unmapped: 36429824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1784911 data_alloc: 234881024 data_used: 18894848
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162725888 unmapped: 36429824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162725888 unmapped: 36429824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77842000 session 0x556c77312b40
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77843400 session 0x556c74d623c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77f8e000 session 0x556c755afc20
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77f8e800 session 0x556c774ea3c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162725888 unmapped: 36429824 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 42.798362732s of 42.870792389s, submitted: 8
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c74e27400 session 0x556c75c71a40
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77842000 session 0x556c77dc3e00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77843400 session 0x556c773b9a40
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77f8e000 session 0x556c774e65a0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77f8e800 session 0x556c753b41e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162889728 unmapped: 36265984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162889728 unmapped: 36265984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1810643 data_alloc: 234881024 data_used: 18894848
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162889728 unmapped: 36265984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162889728 unmapped: 36265984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162889728 unmapped: 36265984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162889728 unmapped: 36265984 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162897920 unmapped: 36257792 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1810643 data_alloc: 234881024 data_used: 18894848
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 36249600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 36249600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 36249600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 36249600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 36249600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1810643 data_alloc: 234881024 data_used: 18894848
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 36249600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 36249600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 36249600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 36249600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162955264 unmapped: 36200448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1814963 data_alloc: 234881024 data_used: 19456000
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162955264 unmapped: 36200448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162955264 unmapped: 36200448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162955264 unmapped: 36200448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162955264 unmapped: 36200448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162955264 unmapped: 36200448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1826323 data_alloc: 234881024 data_used: 21069824
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162955264 unmapped: 36200448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162955264 unmapped: 36200448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162955264 unmapped: 36200448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162963456 unmapped: 36192256 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b79da000/0x0/0x1bfc00000, data 0x1804c1a/0x1934000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 162963456 unmapped: 36192256 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1826323 data_alloc: 234881024 data_used: 21069824
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.666198730s of 27.696617126s, submitted: 7
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167157760 unmapped: 31997952 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7344000/0x0/0x1bfc00000, data 0x1e8cc1a/0x1fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 168222720 unmapped: 30932992 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 168599552 unmapped: 30556160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 168599552 unmapped: 30556160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 168599552 unmapped: 30556160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b72fc000/0x0/0x1bfc00000, data 0x1eccc1a/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1910799 data_alloc: 234881024 data_used: 22007808
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 168599552 unmapped: 30556160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b72fc000/0x0/0x1bfc00000, data 0x1eccc1a/0x1ffc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 168599552 unmapped: 30556160 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167075840 unmapped: 32079872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167075840 unmapped: 32079872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167075840 unmapped: 32079872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1904723 data_alloc: 234881024 data_used: 22007808
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b730e000/0x0/0x1bfc00000, data 0x1ed0c1a/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167075840 unmapped: 32079872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c75a14800 session 0x556c774e6960
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167075840 unmapped: 32079872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167075840 unmapped: 32079872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167084032 unmapped: 32071680 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167084032 unmapped: 32071680 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1904723 data_alloc: 234881024 data_used: 22007808
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b730e000/0x0/0x1bfc00000, data 0x1ed0c1a/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167084032 unmapped: 32071680 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b730e000/0x0/0x1bfc00000, data 0x1ed0c1a/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167084032 unmapped: 32071680 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b730e000/0x0/0x1bfc00000, data 0x1ed0c1a/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b730e000/0x0/0x1bfc00000, data 0x1ed0c1a/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167084032 unmapped: 32071680 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b730e000/0x0/0x1bfc00000, data 0x1ed0c1a/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167092224 unmapped: 32063488 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b730e000/0x0/0x1bfc00000, data 0x1ed0c1a/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167092224 unmapped: 32063488 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1904723 data_alloc: 234881024 data_used: 22007808
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167092224 unmapped: 32063488 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c76a8d400 session 0x556c774474a0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167116800 unmapped: 32038912 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b730e000/0x0/0x1bfc00000, data 0x1ed0c1a/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167124992 unmapped: 32030720 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167124992 unmapped: 32030720 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167124992 unmapped: 32030720 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1906483 data_alloc: 234881024 data_used: 22077440
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 24.970357895s of 25.187677383s, submitted: 126
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167190528 unmapped: 31965184 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167174144 unmapped: 31981568 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167215104 unmapped: 31940608 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b730e000/0x0/0x1bfc00000, data 0x1ed0c1a/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [1,0,1])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167395328 unmapped: 31760384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167395328 unmapped: 31760384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1902515 data_alloc: 234881024 data_used: 22093824
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167395328 unmapped: 31760384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167395328 unmapped: 31760384 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b730e000/0x0/0x1bfc00000, data 0x1ed0c1a/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167444480 unmapped: 31711232 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167469056 unmapped: 31686656 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167469056 unmapped: 31686656 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1912867 data_alloc: 234881024 data_used: 23130112
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167469056 unmapped: 31686656 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b730e000/0x0/0x1bfc00000, data 0x1ed0c1a/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167469056 unmapped: 31686656 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167477248 unmapped: 31678464 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.004063606s of 12.998340607s, submitted: 430
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167485440 unmapped: 31670272 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167485440 unmapped: 31670272 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1910915 data_alloc: 234881024 data_used: 23126016
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167485440 unmapped: 31670272 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167485440 unmapped: 31670272 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b730e000/0x0/0x1bfc00000, data 0x1ed0c1a/0x2000000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 167485440 unmapped: 31670272 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77842000 session 0x556c75537a40
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c76a8d400 session 0x556c77ab81e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 34201600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 34201600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 34201600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 34201600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 34201600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 34201600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 34201600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 34201600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164954112 unmapped: 34201600 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164962304 unmapped: 34193408 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164962304 unmapped: 34193408 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164962304 unmapped: 34193408 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164962304 unmapped: 34193408 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164962304 unmapped: 34193408 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164962304 unmapped: 34193408 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164970496 unmapped: 34185216 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164970496 unmapped: 34185216 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164970496 unmapped: 34185216 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164970496 unmapped: 34185216 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164970496 unmapped: 34185216 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164970496 unmapped: 34185216 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164978688 unmapped: 34177024 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164978688 unmapped: 34177024 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164978688 unmapped: 34177024 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164978688 unmapped: 34177024 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164978688 unmapped: 34177024 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164986880 unmapped: 34168832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164986880 unmapped: 34168832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164986880 unmapped: 34168832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164986880 unmapped: 34168832 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164995072 unmapped: 34160640 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164995072 unmapped: 34160640 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164995072 unmapped: 34160640 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 164995072 unmapped: 34160640 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165003264 unmapped: 34152448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165003264 unmapped: 34152448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165003264 unmapped: 34152448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165003264 unmapped: 34152448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165003264 unmapped: 34152448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165003264 unmapped: 34152448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165003264 unmapped: 34152448 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165011456 unmapped: 34144256 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165011456 unmapped: 34144256 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165011456 unmapped: 34144256 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165011456 unmapped: 34144256 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165011456 unmapped: 34144256 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165011456 unmapped: 34144256 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 34136064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 34136064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 34136064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 34136064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 34136064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 34136064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 34136064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 34136064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 34136064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 34136064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 34136064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165019648 unmapped: 34136064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165027840 unmapped: 34127872 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165036032 unmapped: 34119680 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165036032 unmapped: 34119680 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b7c0b000/0x0/0x1bfc00000, data 0x15d4c0a/0x1703000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165036032 unmapped: 34119680 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1797401 data_alloc: 234881024 data_used: 18911232
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165036032 unmapped: 34119680 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165036032 unmapped: 34119680 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77f8e000 session 0x556c77459860
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77f8e800 session 0x556c75c712c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c74e25c00 session 0x556c77e0a3c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c76a8d400 session 0x556c77f503c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 99.483657837s of 99.519470215s, submitted: 18
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 171352064 unmapped: 27803648 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77842000 session 0x556c774eba40
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77f8e000 session 0x556c74bfd0e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77f8e800 session 0x556c753b4000
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c78c74000 session 0x556c74cf81e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c76a8d400 session 0x556c774463c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165732352 unmapped: 33423360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b73bd000/0x0/0x1bfc00000, data 0x1e20c7c/0x1f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165732352 unmapped: 33423360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1870183 data_alloc: 234881024 data_used: 18915328
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165732352 unmapped: 33423360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165732352 unmapped: 33423360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165732352 unmapped: 33423360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b73bd000/0x0/0x1bfc00000, data 0x1e20c7c/0x1f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165732352 unmapped: 33423360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165732352 unmapped: 33423360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1870183 data_alloc: 234881024 data_used: 18915328
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165732352 unmapped: 33423360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165732352 unmapped: 33423360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 165748736 unmapped: 33406976 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b73bd000/0x0/0x1bfc00000, data 0x1e20c7c/0x1f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169467904 unmapped: 29687808 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b73bd000/0x0/0x1bfc00000, data 0x1e20c7c/0x1f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169467904 unmapped: 29687808 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1930795 data_alloc: 234881024 data_used: 27324416
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169467904 unmapped: 29687808 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b73bd000/0x0/0x1bfc00000, data 0x1e20c7c/0x1f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169467904 unmapped: 29687808 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169467904 unmapped: 29687808 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169467904 unmapped: 29687808 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169467904 unmapped: 29687808 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1930795 data_alloc: 234881024 data_used: 27324416
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b73bd000/0x0/0x1bfc00000, data 0x1e20c7c/0x1f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169467904 unmapped: 29687808 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169467904 unmapped: 29687808 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b73bd000/0x0/0x1bfc00000, data 0x1e20c7c/0x1f51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169467904 unmapped: 29687808 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.593112946s of 21.262189865s, submitted: 44
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 169467904 unmapped: 29687808 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175259648 unmapped: 23896064 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2012717 data_alloc: 234881024 data_used: 27316224
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175652864 unmapped: 23502848 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175923200 unmapped: 23232512 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 24207360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b64f8000/0x0/0x1bfc00000, data 0x28d5c7c/0x2a06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 24207360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 24207360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038755 data_alloc: 251658240 data_used: 28950528
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 24207360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 24207360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 24207360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b64f8000/0x0/0x1bfc00000, data 0x28d5c7c/0x2a06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 24207360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 24207360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037223 data_alloc: 251658240 data_used: 28954624
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 24207360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 24207360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b64d7000/0x0/0x1bfc00000, data 0x28f6c7c/0x2a27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174948352 unmapped: 24207360 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174956544 unmapped: 24199168 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174956544 unmapped: 24199168 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037223 data_alloc: 251658240 data_used: 28954624
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b64d7000/0x0/0x1bfc00000, data 0x28f6c7c/0x2a27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 24190976 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 24190976 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 24190976 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b64d7000/0x0/0x1bfc00000, data 0x28f6c7c/0x2a27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 24190976 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.611631393s of 20.919385910s, submitted: 129
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 24190976 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2037267 data_alloc: 251658240 data_used: 28954624
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 24190976 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b64d4000/0x0/0x1bfc00000, data 0x28f9c7c/0x2a2a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174972928 unmapped: 24182784 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174981120 unmapped: 24174592 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174981120 unmapped: 24174592 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 heartbeat osd_stat(store_statfs(0x1b64c1000/0x0/0x1bfc00000, data 0x290bc7c/0x2a3c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 ms_handle_reset con 0x556c77f8e000 session 0x556c774ea3c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174981120 unmapped: 24174592 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2038895 data_alloc: 251658240 data_used: 28987392
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 220 ms_handle_reset con 0x556c77f8e800 session 0x556c77312b40
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175005696 unmapped: 24150016 heap: 199155712 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 220 ms_handle_reset con 0x556c78c74000 session 0x556c79657e00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 220 heartbeat osd_stat(store_statfs(0x1b5c70000/0x0/0x1bfc00000, data 0x315b937/0x328e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 220 ms_handle_reset con 0x556c78c75c00 session 0x556c75b2ef00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188760064 unmapped: 12681216 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 220 handle_osd_map epochs [220,221], i have 220, src has [1,221]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 221 ms_handle_reset con 0x556c76a8d400 session 0x556c77e7ad20
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188776448 unmapped: 12664832 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 221 heartbeat osd_stat(store_statfs(0x1b5a3f000/0x0/0x1bfc00000, data 0x338b582/0x34be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 221 handle_osd_map epochs [221,222], i have 221, src has [1,222]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188882944 unmapped: 12558336 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188981248 unmapped: 12460032 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2183695 data_alloc: 251658240 data_used: 39079936
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188981248 unmapped: 12460032 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 222 heartbeat osd_stat(store_statfs(0x1b5a37000/0x0/0x1bfc00000, data 0x33911f7/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188981248 unmapped: 12460032 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188981248 unmapped: 12460032 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188989440 unmapped: 12451840 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 222 handle_osd_map epochs [222,223], i have 222, src has [1,223]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.741188049s of 14.920697212s, submitted: 45
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b5a37000/0x0/0x1bfc00000, data 0x33911f7/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182075392 unmapped: 19365888 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2168717 data_alloc: 251658240 data_used: 39079936
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b5a37000/0x0/0x1bfc00000, data 0x33911f7/0x34c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182075392 unmapped: 19365888 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b5a35000/0x0/0x1bfc00000, data 0x3392d36/0x34c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 ms_handle_reset con 0x556c77f8e000 session 0x556c77312d20
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 ms_handle_reset con 0x556c77f8e800 session 0x556c74d62000
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 ms_handle_reset con 0x556c78c74000 session 0x556c77312f00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 ms_handle_reset con 0x556c78c75c00 session 0x556c74d621e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182075392 unmapped: 19365888 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182075392 unmapped: 19365888 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182075392 unmapped: 19365888 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182075392 unmapped: 19365888 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2169311 data_alloc: 251658240 data_used: 39079936
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182075392 unmapped: 19365888 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b5a36000/0x0/0x1bfc00000, data 0x3392d36/0x34c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182083584 unmapped: 19357696 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 ms_handle_reset con 0x556c76a8d400 session 0x556c774590e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182083584 unmapped: 19357696 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182091776 unmapped: 19349504 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 187817984 unmapped: 13623296 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2223392 data_alloc: 268435456 data_used: 46452736
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 187826176 unmapped: 13615104 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 187826176 unmapped: 13615104 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b5a36000/0x0/0x1bfc00000, data 0x3392d36/0x34c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 187826176 unmapped: 13615104 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 187834368 unmapped: 13606912 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 187834368 unmapped: 13606912 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2223392 data_alloc: 268435456 data_used: 46452736
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 187834368 unmapped: 13606912 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b5a36000/0x0/0x1bfc00000, data 0x3392d36/0x34c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 187834368 unmapped: 13606912 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 187834368 unmapped: 13606912 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 187834368 unmapped: 13606912 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b5a36000/0x0/0x1bfc00000, data 0x3392d36/0x34c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 187834368 unmapped: 13606912 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2223392 data_alloc: 268435456 data_used: 46452736
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b5a36000/0x0/0x1bfc00000, data 0x3392d36/0x34c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.753053665s of 20.838165283s, submitted: 22
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188055552 unmapped: 13385728 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 12992512 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 12992512 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 12992512 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 12992512 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2246619 data_alloc: 268435456 data_used: 48144384
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 12976128 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b586e000/0x0/0x1bfc00000, data 0x355ad36/0x3690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 12943360 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 12943360 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 12943360 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 12943360 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2246619 data_alloc: 268435456 data_used: 48144384
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b586e000/0x0/0x1bfc00000, data 0x355ad36/0x3690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 12943360 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 12943360 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 12943360 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 12943360 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 12943360 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2246619 data_alloc: 268435456 data_used: 48144384
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b586e000/0x0/0x1bfc00000, data 0x355ad36/0x3690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 12943360 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 12943360 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 12943360 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 12935168 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 12935168 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2246619 data_alloc: 268435456 data_used: 48144384
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 12935168 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b586e000/0x0/0x1bfc00000, data 0x355ad36/0x3690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 12935168 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b586e000/0x0/0x1bfc00000, data 0x355ad36/0x3690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 12935168 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b586e000/0x0/0x1bfc00000, data 0x355ad36/0x3690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 12935168 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 12935168 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2246619 data_alloc: 268435456 data_used: 48144384
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b586e000/0x0/0x1bfc00000, data 0x355ad36/0x3690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 12935168 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188964864 unmapped: 12476416 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 ms_handle_reset con 0x556c78c74000 session 0x556c74320000
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.277669907s of 27.591644287s, submitted: 2
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 188973056 unmapped: 12468224 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 heartbeat osd_stat(store_statfs(0x1b586e000/0x0/0x1bfc00000, data 0x355ad36/0x3690000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 189079552 unmapped: 12361728 heap: 201441280 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 224 ms_handle_reset con 0x556c78c75400 session 0x556c774e63c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 224 ms_handle_reset con 0x556c78c74400 session 0x556c74d63a40
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 224 ms_handle_reset con 0x556c78c75800 session 0x556c7565a5a0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 224 ms_handle_reset con 0x556c76a8d400 session 0x556c774472c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 225 ms_handle_reset con 0x556c78c74000 session 0x556c77dc3e00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 195428352 unmapped: 22822912 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2442362 data_alloc: 268435456 data_used: 51949568
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 225 handle_osd_map epochs [225,226], i have 225, src has [1,226]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 198148096 unmapped: 20103168 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 226 ms_handle_reset con 0x556c78c74400 session 0x556c75c712c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 196026368 unmapped: 22224896 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 226 heartbeat osd_stat(store_statfs(0x1b3e33000/0x0/0x1bfc00000, data 0x4f91313/0x50cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 22192128 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 196059136 unmapped: 22192128 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 227 ms_handle_reset con 0x556c78c75400 session 0x556c773b61e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 196100096 unmapped: 22151168 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2301632 data_alloc: 268435456 data_used: 51961856
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 196100096 unmapped: 22151168 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 227 ms_handle_reset con 0x556c77f8e000 session 0x556c774ea960
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 227 ms_handle_reset con 0x556c77f8e800 session 0x556c77e7bc20
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 196124672 unmapped: 22126592 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 227 ms_handle_reset con 0x556c76a8d400 session 0x556c774e74a0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 197189632 unmapped: 21061632 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 227 heartbeat osd_stat(store_statfs(0x1b5a2a000/0x0/0x1bfc00000, data 0x3399f7a/0x34d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 197189632 unmapped: 21061632 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 227 handle_osd_map epochs [227,228], i have 227, src has [1,228]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.203119278s of 11.682655334s, submitted: 139
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 197197824 unmapped: 21053440 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2264874 data_alloc: 268435456 data_used: 50102272
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 197197824 unmapped: 21053440 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 228 heartbeat osd_stat(store_statfs(0x1b5a26000/0x0/0x1bfc00000, data 0x339bad5/0x34d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 229 ms_handle_reset con 0x556c78c74000 session 0x556c755361e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 28491776 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 229 ms_handle_reset con 0x556c77842000 session 0x556c755af680
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 189734912 unmapped: 28516352 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 229 heartbeat osd_stat(store_statfs(0x1b64a0000/0x0/0x1bfc00000, data 0x2921782/0x2a5e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [1])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 229 ms_handle_reset con 0x556c78c74400 session 0x556c753b5c20
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175513600 unmapped: 42737664 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175513600 unmapped: 42737664 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1867163 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175513600 unmapped: 42737664 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 230 heartbeat osd_stat(store_statfs(0x1b745b000/0x0/0x1bfc00000, data 0x15e826b/0x1724000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175513600 unmapped: 42737664 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175513600 unmapped: 42737664 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175513600 unmapped: 42737664 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 230 heartbeat osd_stat(store_statfs(0x1b745b000/0x0/0x1bfc00000, data 0x15e826b/0x1724000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 230 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.800604820s of 10.230946541s, submitted: 84
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175521792 unmapped: 42729472 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175521792 unmapped: 42729472 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175521792 unmapped: 42729472 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175521792 unmapped: 42729472 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175521792 unmapped: 42729472 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175521792 unmapped: 42729472 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175521792 unmapped: 42729472 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175521792 unmapped: 42729472 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 26 13:54:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/45974341' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1869961 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b77d6000/0x0/0x1bfc00000, data 0x15e9daa/0x1727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175529984 unmapped: 42721280 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 68.023765564s of 68.033340454s, submitted: 11
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 ms_handle_reset con 0x556c76a8d400 session 0x556c774594a0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 ms_handle_reset con 0x556c77842000 session 0x556c77410000
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175841280 unmapped: 42409984 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175841280 unmapped: 42409984 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6e5e000/0x0/0x1bfc00000, data 0x1f61e0c/0x20a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175841280 unmapped: 42409984 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1943836 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6e5e000/0x0/0x1bfc00000, data 0x1f61e0c/0x20a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175841280 unmapped: 42409984 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6e5e000/0x0/0x1bfc00000, data 0x1f61e0c/0x20a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175841280 unmapped: 42409984 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6e5e000/0x0/0x1bfc00000, data 0x1f61e0c/0x20a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175841280 unmapped: 42409984 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 ms_handle_reset con 0x556c77f8e800 session 0x556c77e7a3c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175841280 unmapped: 42409984 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 175841280 unmapped: 42409984 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1944497 data_alloc: 234881024 data_used: 18964480
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 178479104 unmapped: 39772160 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6e5d000/0x0/0x1bfc00000, data 0x1f61e2f/0x20a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 178479104 unmapped: 39772160 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 178479104 unmapped: 39772160 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 178479104 unmapped: 39772160 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 178479104 unmapped: 39772160 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2014577 data_alloc: 251658240 data_used: 28803072
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6e5d000/0x0/0x1bfc00000, data 0x1f61e2f/0x20a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 178479104 unmapped: 39772160 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 178479104 unmapped: 39772160 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 178479104 unmapped: 39772160 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6e5d000/0x0/0x1bfc00000, data 0x1f61e2f/0x20a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 178479104 unmapped: 39772160 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 178479104 unmapped: 39772160 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2014577 data_alloc: 251658240 data_used: 28803072
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.247598648s of 18.361377716s, submitted: 32
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 181092352 unmapped: 37158912 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6e5d000/0x0/0x1bfc00000, data 0x1f61e2f/0x20a1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 180592640 unmapped: 37658624 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 181960704 unmapped: 36290560 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182329344 unmapped: 35921920 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 35889152 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2060707 data_alloc: 251658240 data_used: 29544448
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 35889152 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6b5b000/0x0/0x1bfc00000, data 0x225be2f/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 35889152 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 35889152 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 35889152 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 35889152 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2060723 data_alloc: 251658240 data_used: 29544448
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 35889152 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6b5b000/0x0/0x1bfc00000, data 0x225be2f/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 35889152 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6b5b000/0x0/0x1bfc00000, data 0x225be2f/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2060723 data_alloc: 251658240 data_used: 29544448
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6b5b000/0x0/0x1bfc00000, data 0x225be2f/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6b5b000/0x0/0x1bfc00000, data 0x225be2f/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2060723 data_alloc: 251658240 data_used: 29544448
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6b5b000/0x0/0x1bfc00000, data 0x225be2f/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2061043 data_alloc: 251658240 data_used: 29552640
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6b5b000/0x0/0x1bfc00000, data 0x225be2f/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182370304 unmapped: 35880960 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b6b5b000/0x0/0x1bfc00000, data 0x225be2f/0x239b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.188907623s of 28.334054947s, submitted: 68
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 35872768 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182386688 unmapped: 35864576 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2060659 data_alloc: 251658240 data_used: 29556736
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182386688 unmapped: 35864576 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182394880 unmapped: 35856384 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 232 heartbeat osd_stat(store_statfs(0x1b6b5f000/0x0/0x1bfc00000, data 0x225da88/0x239e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182394880 unmapped: 35856384 heap: 218251264 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 233 heartbeat osd_stat(store_statfs(0x1b6b5f000/0x0/0x1bfc00000, data 0x225da88/0x239e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 233 ms_handle_reset con 0x556c78c75400 session 0x556c772e2780
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182575104 unmapped: 39534592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 233 ms_handle_reset con 0x556c78c74c00 session 0x556c77e6a1e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 233 ms_handle_reset con 0x556c76a8d400 session 0x556c79657680
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182583296 unmapped: 39526400 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2216304 data_alloc: 251658240 data_used: 31272960
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 235 ms_handle_reset con 0x556c77842000 session 0x556c772e3a40
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182583296 unmapped: 39526400 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 235 ms_handle_reset con 0x556c77f8e800 session 0x556c7565b860
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 235 ms_handle_reset con 0x556c78c75400 session 0x556c778a4d20
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 235 heartbeat osd_stat(store_statfs(0x1b5a69000/0x0/0x1bfc00000, data 0x334d0c7/0x3493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 235 ms_handle_reset con 0x556c79837400 session 0x556c773b6f00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 235 heartbeat osd_stat(store_statfs(0x1b5a69000/0x0/0x1bfc00000, data 0x334d0c7/0x3493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182583296 unmapped: 39526400 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 235 heartbeat osd_stat(store_statfs(0x1b5a69000/0x0/0x1bfc00000, data 0x334d0c7/0x3493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182697984 unmapped: 39411712 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182697984 unmapped: 39411712 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182697984 unmapped: 39411712 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2221710 data_alloc: 251658240 data_used: 31272960
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 235 heartbeat osd_stat(store_statfs(0x1b5a69000/0x0/0x1bfc00000, data 0x334d0c7/0x3493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 235 heartbeat osd_stat(store_statfs(0x1b5a69000/0x0/0x1bfc00000, data 0x334d0c7/0x3493000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182697984 unmapped: 39411712 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182697984 unmapped: 39411712 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 182697984 unmapped: 39411712 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.761216164s of 15.025369644s, submitted: 79
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 180412416 unmapped: 41697280 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 180412416 unmapped: 41697280 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2216028 data_alloc: 251658240 data_used: 31272960
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 236 ms_handle_reset con 0x556c76a8d400 session 0x556c77313680
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 180412416 unmapped: 41697280 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 236 heartbeat osd_stat(store_statfs(0x1b5a67000/0x0/0x1bfc00000, data 0x334ec06/0x3496000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 180412416 unmapped: 41697280 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 237 ms_handle_reset con 0x556c77842000 session 0x556c79657e00
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 180445184 unmapped: 41664512 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 180445184 unmapped: 41664512 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 237 heartbeat osd_stat(store_statfs(0x1b6b4f000/0x0/0x1bfc00000, data 0x22667ef/0x23ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 180461568 unmapped: 41648128 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 237 ms_handle_reset con 0x556c77f8e800 session 0x556c74d621e0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2096042 data_alloc: 251658240 data_used: 31281152
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 237 heartbeat osd_stat(store_statfs(0x1b6b4f000/0x0/0x1bfc00000, data 0x22667ef/0x23ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 237 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 180461568 unmapped: 41648128 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 180461568 unmapped: 41648128 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 238 ms_handle_reset con 0x556c78c74000 session 0x556c747d4d20
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 238 ms_handle_reset con 0x556c78c75400 session 0x556c74cf92c0
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174366720 unmapped: 47742976 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 238 handle_osd_map epochs [238,239], i have 238, src has [1,239]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.846185684s of 10.044912338s, submitted: 89
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174374912 unmapped: 47734784 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174374912 unmapped: 47734784 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1915390 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 239 ms_handle_reset con 0x556c76a8d400 session 0x556c74bfc960
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174374912 unmapped: 47734784 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 239 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f7f8e/0x173f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174374912 unmapped: 47734784 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 239 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f7f8e/0x173f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174374912 unmapped: 47734784 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 239 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f7f8e/0x173f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174374912 unmapped: 47734784 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 239 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f7f8e/0x173f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 239 handle_osd_map epochs [240,240], i have 240, src has [1,240]
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174374912 unmapped: 47734784 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174374912 unmapped: 47734784 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174374912 unmapped: 47734784 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174374912 unmapped: 47734784 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174374912 unmapped: 47734784 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174383104 unmapped: 47726592 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174391296 unmapped: 47718400 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174391296 unmapped: 47718400 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174391296 unmapped: 47718400 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174391296 unmapped: 47718400 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174391296 unmapped: 47718400 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174391296 unmapped: 47718400 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174391296 unmapped: 47718400 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174391296 unmapped: 47718400 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 234881024 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 4200.0 total, 600.0 interval
Cumulative writes: 21K writes, 76K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
Cumulative WAL: 21K writes, 7356 syncs, 2.98 writes per sync, written: 0.06 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2417 writes, 7263 keys, 2417 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s
Interval WAL: 2417 writes, 1055 syncs, 2.29 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174399488 unmapped: 47710208 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174407680 unmapped: 47702016 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 47693824 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174424064 unmapped: 47685632 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174432256 unmapped: 47677440 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174440448 unmapped: 47669248 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174448640 unmapped: 47661056 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174448640 unmapped: 47661056 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1918012 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174448640 unmapped: 47661056 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174448640 unmapped: 47661056 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bb000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174448640 unmapped: 47661056 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174448640 unmapped: 47661056 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 221.899017334s of 221.920364380s, submitted: 29
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174497792 unmapped: 47611904 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174473216 unmapped: 47636480 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [2])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174628864 unmapped: 47480832 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174678016 unmapped: 47431680 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174678016 unmapped: 47431680 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174678016 unmapped: 47431680 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174678016 unmapped: 47431680 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174678016 unmapped: 47431680 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174678016 unmapped: 47431680 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174678016 unmapped: 47431680 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174686208 unmapped: 47423488 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174686208 unmapped: 47423488 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174686208 unmapped: 47423488 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174686208 unmapped: 47423488 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174686208 unmapped: 47423488 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174686208 unmapped: 47423488 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174686208 unmapped: 47423488 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174686208 unmapped: 47423488 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174686208 unmapped: 47423488 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174694400 unmapped: 47415296 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174694400 unmapped: 47415296 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174694400 unmapped: 47415296 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174694400 unmapped: 47415296 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174694400 unmapped: 47415296 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174694400 unmapped: 47415296 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174694400 unmapped: 47415296 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174702592 unmapped: 47407104 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174702592 unmapped: 47407104 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174702592 unmapped: 47407104 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174702592 unmapped: 47407104 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174702592 unmapped: 47407104 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174702592 unmapped: 47407104 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174710784 unmapped: 47398912 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174710784 unmapped: 47398912 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174710784 unmapped: 47398912 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174710784 unmapped: 47398912 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174710784 unmapped: 47398912 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174710784 unmapped: 47398912 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174710784 unmapped: 47398912 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174718976 unmapped: 47390720 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174718976 unmapped: 47390720 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174718976 unmapped: 47390720 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174727168 unmapped: 47382528 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174727168 unmapped: 47382528 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174727168 unmapped: 47382528 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174727168 unmapped: 47382528 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174727168 unmapped: 47382528 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174727168 unmapped: 47382528 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174727168 unmapped: 47382528 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174727168 unmapped: 47382528 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174727168 unmapped: 47382528 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174727168 unmapped: 47382528 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174735360 unmapped: 47374336 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174735360 unmapped: 47374336 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174735360 unmapped: 47374336 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174735360 unmapped: 47374336 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: bluestore.MempoolThread(0x556c7340bb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1917132 data_alloc: 218103808 data_used: 19001344
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174743552 unmapped: 47366144 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: do_command 'config diff' '{prefix=config diff}'
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174817280 unmapped: 47292416 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: do_command 'config show' '{prefix=config show}'
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: do_command 'counter dump' '{prefix=counter dump}'
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: do_command 'counter schema' '{prefix=counter schema}'
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174686208 unmapped: 47423488 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b77bc000/0x0/0x1bfc00000, data 0x15f9acd/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 47144960 heap: 222109696 old mem: 2845415832 new mem: 2845415832
Jan 26 13:54:18 np0005596060 ceph-osd[84834]: do_command 'log dump' '{prefix=log dump}'
Jan 26 13:54:18 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29750 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 26 13:54:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 26 13:54:18 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21564 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 26 13:54:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2572135050' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 26 13:54:18 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29762 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:18 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21582 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:18.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:18 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29777 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:18 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21594 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:18 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:18 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:18 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:18.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:18 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 26 13:54:18 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/898392719' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 26 13:54:19 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31315 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:19 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T18:54:19.033+0000 7f4edf408640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 13:54:19 np0005596060 ceph-mgr[74563]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 13:54:19 np0005596060 nova_compute[247421]: 2026-01-26 18:54:19.135 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:54:19 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:19 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29792 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:19 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21606 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:19 np0005596060 podman[320244]: 2026-01-26 18:54:19.448426617 +0000 UTC m=+0.062667464 container health_status 60ee7702fe362c0975b98b6991cfe8b00f948b22d419cd91307f45ac62f3a72d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 13:54:19 np0005596060 podman[320247]: 2026-01-26 18:54:19.494465956 +0000 UTC m=+0.107928064 container health_status c8abf58dbc93964612d36244b68a06047bd5dda131b022c70e5e580e06402ec1 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '431ca7a7f3efd032b1aee96c3e0a533b29d789a5aae674c39b1aa51c9d150475-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc-a47f6367e2aa6fd37fad62d90a0f84eec63fc2b2f3ac85febc279d40d20616bc'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 13:54:19 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 26 13:54:19 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2240241902' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 26 13:54:20 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29813 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:20 np0005596060 ceph-mgr[74563]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 13:54:20 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T18:54:20.048+0000 7f4edf408640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 13:54:20 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21624 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:20 np0005596060 ceph-mgr[74563]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 13:54:20 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T18:54:20.192+0000 7f4edf408640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 13:54:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 26 13:54:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3485242821' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 26 13:54:20 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31366 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 26 13:54:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/462356512' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 26 13:54:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 26 13:54:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1795061498' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 26 13:54:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:20.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:20 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31375 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 26 13:54:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1962731879' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 26 13:54:20 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:20 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:20 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:20.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:20 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 26 13:54:20 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3458644554' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 26 13:54:21 np0005596060 nova_compute[247421]: 2026-01-26 18:54:21.054 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:54:21 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31399 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:21 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 26 13:54:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/802940379' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 26 13:54:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 26 13:54:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1944218089' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 26 13:54:21 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31423 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 26 13:54:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2986185465' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 26 13:54:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 26 13:54:21 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3607969335' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 26 13:54:21 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:54:21 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31435 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 26 13:54:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2505679431' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 26 13:54:22 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31450 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 26 13:54:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3306137388' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 26 13:54:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 26 13:54:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/730623689' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 26 13:54:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:54:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:22.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:54:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 26 13:54:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1206779818' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 26 13:54:22 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31462 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:22 np0005596060 systemd[1]: Starting Hostname Service...
Jan 26 13:54:22 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 26 13:54:22 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/226095803' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 26 13:54:22 np0005596060 systemd[1]: Started Hostname Service.
Jan 26 13:54:22 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:22 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 26 13:54:22 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:22.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 26 13:54:23 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31474 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:23 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:23 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29951 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:23 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 26 13:54:23 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2134488545' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 26 13:54:23 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21750 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:23 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29960 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:23 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31489 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:23 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21762 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:23 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29966 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:23 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21768 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:23 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29975 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:24 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21780 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:24 np0005596060 nova_compute[247421]: 2026-01-26 18:54:24.137 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:54:24 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31510 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:24 np0005596060 ceph-d4cd1917-5876-51b6-bc64-65a16199754d-mgr-compute-0-mbryrf[74559]: 2026-01-26T18:54:24.272+0000 7f4edf408640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 13:54:24 np0005596060 ceph-mgr[74563]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 13:54:24 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.29993 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:24 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21792 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:24 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.30005 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:24.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:24 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 26 13:54:24 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/51680086' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 26 13:54:24 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21804 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:24 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:24 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:24 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:24.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:25 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.30020 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 26 13:54:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1277334078' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 26 13:54:25 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21819 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:25 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:25 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.30038 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 26 13:54:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3991862697' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 26 13:54:25 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21840 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:25 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.30050 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:25 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 26 13:54:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/857516314' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 26 13:54:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 13:54:25 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 13:54:26 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21852 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:26 np0005596060 nova_compute[247421]: 2026-01-26 18:54:26.102 247428 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 26 13:54:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 13:54:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 13:54:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 13:54:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 13:54:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 13:54:26 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 13:54:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.100 - anonymous [26/Jan/2026:18:54:26.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:26 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 13:54:26 np0005596060 radosgw[92919]: ====== starting new request req=0x7fc3285836f0 =====
Jan 26 13:54:26 np0005596060 radosgw[92919]: ====== req done req=0x7fc3285836f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 26 13:54:26 np0005596060 radosgw[92919]: beast: 0x7fc3285836f0: 192.168.122.102 - anonymous [26/Jan/2026:18:54:26.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 26 13:54:27 np0005596060 ceph-mgr[74563]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 41 MiB data, 416 MiB used, 21 GiB / 21 GiB avail
Jan 26 13:54:27 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 26 13:54:27 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/367121880' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 26 13:54:27 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.30140 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:27 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31663 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:27 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31669 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:27 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.21945 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:27 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31675 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 13:54:28 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31681 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 13:54:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 26 13:54:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/379960897' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 26 13:54:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 26 13:54:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:54:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 26 13:54:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:54:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 26 13:54:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:54:28 np0005596060 ceph-mon[74267]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 26 13:54:28 np0005596060 ceph-mon[74267]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/3479430344' entity='mgr.compute-0.mbryrf' 
Jan 26 13:54:28 np0005596060 ceph-mgr[74563]: log_channel(audit) log [DBG] : from='client.31690 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
